00:00:00.001 Started by upstream project "autotest-per-patch" build number 122826 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.025 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/centos7-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.026 The recommended git tool is: git 00:00:00.026 using credential 00000000-0000-0000-0000-000000000002 00:00:00.027 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/centos7-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.041 Fetching changes from the remote Git repository 00:00:00.044 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.066 Using shallow fetch with depth 1 00:00:00.066 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.066 > git --version # timeout=10 00:00:00.079 > git --version # 'git version 2.39.2' 00:00:00.079 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.080 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.080 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.621 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.629 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.639 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:02.639 > git config core.sparsecheckout # timeout=10 00:00:02.649 > git read-tree -mu HEAD # timeout=10 00:00:02.664 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:02.679 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:02.679 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:02.759 [Pipeline] Start of Pipeline 00:00:02.771 [Pipeline] library 00:00:02.772 Loading library shm_lib@master 00:00:02.772 Library shm_lib@master is cached. Copying from home. 00:00:02.789 [Pipeline] node 00:00:02.794 Running on VM-host-SM17 in /var/jenkins/workspace/centos7-vg-autotest 00:00:02.801 [Pipeline] { 00:00:02.812 [Pipeline] catchError 00:00:02.814 [Pipeline] { 00:00:02.827 [Pipeline] wrap 00:00:02.835 [Pipeline] { 00:00:02.842 [Pipeline] stage 00:00:02.843 [Pipeline] { (Prologue) 00:00:02.854 [Pipeline] echo 00:00:02.856 Node: VM-host-SM17 00:00:02.861 [Pipeline] cleanWs 00:00:02.869 [WS-CLEANUP] Deleting project workspace... 00:00:02.869 [WS-CLEANUP] Deferred wipeout is used... 
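For reference, the shallow checkout recorded above can be repeated outside Jenkins with plain git; a minimal sketch, assuming the Gerrit mirror is reachable and any proxy/credentials are already handled by the environment:

  # Shallow-fetch the jbp build-pool repo and check out the revision noted in the log.
  git init jbp && cd jbp
  git fetch --tags --force --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f FETCH_HEAD   # resolved to 10da8f6d99838e411e4e94523ded0bfebf3e7100 in this run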
00:00:02.876 [WS-CLEANUP] done 00:00:03.015 [Pipeline] setCustomBuildProperty 00:00:03.091 [Pipeline] nodesByLabel 00:00:03.092 Found a total of 1 nodes with the 'sorcerer' label 00:00:03.105 [Pipeline] httpRequest 00:00:03.109 HttpMethod: GET 00:00:03.109 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:03.111 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:03.113 Response Code: HTTP/1.1 200 OK 00:00:03.113 Success: Status code 200 is in the accepted range: 200,404 00:00:03.113 Saving response body to /var/jenkins/workspace/centos7-vg-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:03.252 [Pipeline] sh 00:00:03.529 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:03.545 [Pipeline] httpRequest 00:00:03.549 HttpMethod: GET 00:00:03.550 URL: http://10.211.164.101/packages/spdk_e8841656d9b2d1735733cab2c213856129bb10bb.tar.gz 00:00:03.550 Sending request to url: http://10.211.164.101/packages/spdk_e8841656d9b2d1735733cab2c213856129bb10bb.tar.gz 00:00:03.551 Response Code: HTTP/1.1 200 OK 00:00:03.552 Success: Status code 200 is in the accepted range: 200,404 00:00:03.552 Saving response body to /var/jenkins/workspace/centos7-vg-autotest/spdk_e8841656d9b2d1735733cab2c213856129bb10bb.tar.gz 00:00:39.702 [Pipeline] sh 00:00:39.981 + tar --no-same-owner -xf spdk_e8841656d9b2d1735733cab2c213856129bb10bb.tar.gz 00:00:43.279 [Pipeline] sh 00:00:43.556 + git -C spdk log --oneline -n5 00:00:43.557 e8841656d nvmf: add nvmf_host_free() 00:00:43.557 e53d15a2a nvmf/tcp: flush sockets when removing from a sock group 00:00:43.557 5b83ef1c4 nvmf/auth: Diffie-Hellman exchange support 00:00:43.557 5c45cee21 nvmf/auth: add nvmf_auth_qpair_cleanup() 00:00:43.557 519ecd617 nvme/auth: make DH functions public 00:00:43.573 [Pipeline] writeFile 00:00:43.587 [Pipeline] sh 00:00:43.865 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:43.876 [Pipeline] sh 00:00:44.158 + cat autorun-spdk.conf 00:00:44.158 SPDK_TEST_UNITTEST=1 00:00:44.158 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.158 SPDK_TEST_BLOCKDEV=1 00:00:44.158 SPDK_TEST_DAOS=1 00:00:44.158 SPDK_RUN_ASAN=1 00:00:44.158 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:44.166 RUN_NIGHTLY=0 00:00:44.168 [Pipeline] } 00:00:44.184 [Pipeline] // stage 00:00:44.201 [Pipeline] stage 00:00:44.203 [Pipeline] { (Run VM) 00:00:44.219 [Pipeline] sh 00:00:44.499 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:44.499 + echo 'Start stage prepare_nvme.sh' 00:00:44.499 Start stage prepare_nvme.sh 00:00:44.499 + [[ -n 0 ]] 00:00:44.499 + disk_prefix=ex0 00:00:44.499 + [[ -n /var/jenkins/workspace/centos7-vg-autotest ]] 00:00:44.499 + [[ -e /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf ]] 00:00:44.499 + source /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf 00:00:44.499 ++ SPDK_TEST_UNITTEST=1 00:00:44.499 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.499 ++ SPDK_TEST_BLOCKDEV=1 00:00:44.499 ++ SPDK_TEST_DAOS=1 00:00:44.499 ++ SPDK_RUN_ASAN=1 00:00:44.499 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:44.499 ++ RUN_NIGHTLY=0 00:00:44.499 + cd /var/jenkins/workspace/centos7-vg-autotest 00:00:44.499 + nvme_files=() 00:00:44.499 + declare -A nvme_files 00:00:44.499 + backend_dir=/var/lib/libvirt/images/backends 00:00:44.499 + nvme_files['nvme.img']=5G 00:00:44.499 + nvme_files['nvme-cmb.img']=5G 00:00:44.499 + nvme_files['nvme-multi0.img']=4G 00:00:44.499 + 
nvme_files['nvme-multi1.img']=4G 00:00:44.499 + nvme_files['nvme-multi2.img']=4G 00:00:44.499 + nvme_files['nvme-openstack.img']=8G 00:00:44.499 + nvme_files['nvme-zns.img']=5G 00:00:44.499 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:44.499 + (( SPDK_TEST_FTL == 1 )) 00:00:44.499 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:44.499 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:44.499 + for nvme in "${!nvme_files[@]}" 00:00:44.499 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:00:44.499 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.499 + for nvme in "${!nvme_files[@]}" 00:00:44.499 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:00:44.499 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.499 + for nvme in "${!nvme_files[@]}" 00:00:44.499 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:00:44.499 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:44.499 + for nvme in "${!nvme_files[@]}" 00:00:44.499 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:00:44.499 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.499 + for nvme in "${!nvme_files[@]}" 00:00:44.499 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:00:44.499 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.499 + for nvme in "${!nvme_files[@]}" 00:00:44.499 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:00:44.499 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.499 + for nvme in "${!nvme_files[@]}" 00:00:44.499 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:00:45.078 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:45.078 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:00:45.078 + echo 'End stage prepare_nvme.sh' 00:00:45.078 End stage prepare_nvme.sh 00:00:45.088 [Pipeline] sh 00:00:45.365 + DISTRO=centos7 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:45.365 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -H -a -v -f centos7 00:00:45.365 00:00:45.365 DIR=/var/jenkins/workspace/centos7-vg-autotest/spdk/scripts/vagrant 00:00:45.366 SPDK_DIR=/var/jenkins/workspace/centos7-vg-autotest/spdk 00:00:45.366 VAGRANT_TARGET=/var/jenkins/workspace/centos7-vg-autotest 00:00:45.366 HELP=0 00:00:45.366 DRY_RUN=0 00:00:45.366 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img, 00:00:45.366 NVME_DISKS_TYPE=nvme, 00:00:45.366 NVME_AUTO_CREATE=0 00:00:45.366 NVME_DISKS_NAMESPACES=, 00:00:45.366 NVME_CMB=, 00:00:45.366 NVME_PMR=, 00:00:45.366 NVME_ZNS=, 00:00:45.366 NVME_MS=, 00:00:45.366 NVME_FDP=, 00:00:45.366 SPDK_VAGRANT_DISTRO=centos7 
00:00:45.366 SPDK_VAGRANT_VMCPU=10 00:00:45.366 SPDK_VAGRANT_VMRAM=12288 00:00:45.366 SPDK_VAGRANT_PROVIDER=libvirt 00:00:45.366 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:45.366 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:45.366 SPDK_OPENSTACK_NETWORK=0 00:00:45.366 VAGRANT_PACKAGE_BOX=0 00:00:45.366 VAGRANTFILE=/var/jenkins/workspace/centos7-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:45.366 FORCE_DISTRO=true 00:00:45.366 VAGRANT_BOX_VERSION= 00:00:45.366 EXTRA_VAGRANTFILES= 00:00:45.366 NIC_MODEL=e1000 00:00:45.366 00:00:45.366 mkdir: created directory '/var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt' 00:00:45.366 /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt /var/jenkins/workspace/centos7-vg-autotest 00:00:48.649 Bringing machine 'default' up with 'libvirt' provider... 00:00:49.584 ==> default: Creating image (snapshot of base box volume). 00:00:49.584 ==> default: Creating domain with the following settings... 00:00:49.584 ==> default: -- Name: centos7-7.8.2003-1711172311-2200_default_1715728572_4525222614efde3cec61 00:00:49.584 ==> default: -- Domain type: kvm 00:00:49.584 ==> default: -- Cpus: 10 00:00:49.584 ==> default: -- Feature: acpi 00:00:49.584 ==> default: -- Feature: apic 00:00:49.584 ==> default: -- Feature: pae 00:00:49.584 ==> default: -- Memory: 12288M 00:00:49.584 ==> default: -- Memory Backing: hugepages: 00:00:49.584 ==> default: -- Management MAC: 00:00:49.584 ==> default: -- Loader: 00:00:49.584 ==> default: -- Nvram: 00:00:49.584 ==> default: -- Base box: spdk/centos7 00:00:49.584 ==> default: -- Storage pool: default 00:00:49.584 ==> default: -- Image: /var/lib/libvirt/images/centos7-7.8.2003-1711172311-2200_default_1715728572_4525222614efde3cec61.img (20G) 00:00:49.584 ==> default: -- Volume Cache: default 00:00:49.584 ==> default: -- Kernel: 00:00:49.584 ==> default: -- Initrd: 00:00:49.584 ==> default: -- Graphics Type: vnc 00:00:49.584 ==> default: -- Graphics Port: -1 00:00:49.584 ==> default: -- Graphics IP: 127.0.0.1 00:00:49.584 ==> default: -- Graphics Password: Not defined 00:00:49.584 ==> default: -- Video Type: cirrus 00:00:49.584 ==> default: -- Video VRAM: 9216 00:00:49.584 ==> default: -- Sound Type: 00:00:49.584 ==> default: -- Keymap: en-us 00:00:49.584 ==> default: -- TPM Path: 00:00:49.584 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:49.584 ==> default: -- Command line args: 00:00:49.584 ==> default: -> value=-device, 00:00:49.584 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:49.584 ==> default: -> value=-drive, 00:00:49.584 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:00:49.584 ==> default: -> value=-device, 00:00:49.584 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:49.842 ==> default: Creating shared folders metadata... 00:00:49.842 ==> default: Starting domain. 00:00:51.215 ==> default: Waiting for domain to get an IP address... 00:01:03.450 ==> default: Waiting for SSH to become available... 00:01:04.016 ==> default: Configuring and enabling network interfaces... 
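The command-line args listed above are the NVMe wiring that vagrant-libvirt hands to QEMU. A rough stand-alone equivalent, for illustration only: the nvme/nvme-ns/drive arguments are copied from the log, while the machine, CPU and memory flags are assumptions based on the VMCPU/VMRAM values above, not the exact command the provider generated:

  # -machine/-smp/-m are assumed for illustration; the NVMe args are verbatim from the log above.
  qemu-system-x86_64 \
      -machine q35,accel=kvm -smp 10 -m 12288 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096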
00:01:07.300 default: SSH address: 192.168.121.106:22 00:01:07.300 default: SSH username: vagrant 00:01:07.300 default: SSH auth method: private key 00:01:08.675 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:18.642 ==> default: Mounting SSHFS shared folder... 00:01:19.209 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output => /home/vagrant/spdk_repo/output 00:01:19.209 ==> default: Checking Mount.. 00:01:19.774 ==> default: Folder Successfully Mounted! 00:01:19.774 ==> default: Running provisioner: file... 00:01:20.353 default: ~/.gitconfig => .gitconfig 00:01:20.611 00:01:20.611 SUCCESS! 00:01:20.611 00:01:20.611 cd to /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt and type "vagrant ssh" to use. 00:01:20.611 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:20.611 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt" to destroy all trace of vm. 00:01:20.611 00:01:20.624 [Pipeline] } 00:01:20.648 [Pipeline] // stage 00:01:20.661 [Pipeline] dir 00:01:20.661 Running in /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt 00:01:20.664 [Pipeline] { 00:01:20.684 [Pipeline] catchError 00:01:20.686 [Pipeline] { 00:01:20.700 [Pipeline] sh 00:01:20.978 + vagrant ssh-config --host vagrant 00:01:20.978 + sed -ne /^Host/,$p 00:01:20.978 + tee ssh_conf 00:01:25.162 Host vagrant 00:01:25.162 HostName 192.168.121.106 00:01:25.162 User vagrant 00:01:25.162 Port 22 00:01:25.162 UserKnownHostsFile /dev/null 00:01:25.162 StrictHostKeyChecking no 00:01:25.162 PasswordAuthentication no 00:01:25.162 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-centos7/7.8.2003-1711172311-2200/libvirt/centos7 00:01:25.162 IdentitiesOnly yes 00:01:25.162 LogLevel FATAL 00:01:25.162 ForwardAgent yes 00:01:25.162 ForwardX11 yes 00:01:25.162 00:01:25.176 [Pipeline] withEnv 00:01:25.178 [Pipeline] { 00:01:25.194 [Pipeline] sh 00:01:25.471 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:25.471 source /etc/os-release 00:01:25.471 [[ -e /image.version ]] && img=$(< /image.version) 00:01:25.471 # Minimal, systemd-like check. 00:01:25.471 if [[ -e /.dockerenv ]]; then 00:01:25.471 # Clear garbage from the node's name: 00:01:25.471 # agt-er_autotest_547-896 -> autotest_547-896 00:01:25.471 # $HOSTNAME is the actual container id 00:01:25.471 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:25.471 if mountpoint -q /etc/hostname; then 00:01:25.471 # We can assume this is a mount from a host where container is running, 00:01:25.471 # so fetch its hostname to easily identify the target swarm worker. 
00:01:25.471 container="$(< /etc/hostname) ($agent)" 00:01:25.471 else 00:01:25.471 # Fallback 00:01:25.471 container=$agent 00:01:25.471 fi 00:01:25.471 fi 00:01:25.471 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:25.471 00:01:25.481 [Pipeline] } 00:01:25.500 [Pipeline] // withEnv 00:01:25.507 [Pipeline] setCustomBuildProperty 00:01:25.522 [Pipeline] stage 00:01:25.525 [Pipeline] { (Tests) 00:01:25.543 [Pipeline] sh 00:01:25.820 + scp -F ssh_conf -r /var/jenkins/workspace/centos7-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:25.838 [Pipeline] timeout 00:01:25.838 Timeout set to expire in 1 hr 0 min 00:01:25.840 [Pipeline] { 00:01:25.858 [Pipeline] sh 00:01:26.133 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:26.698 HEAD is now at e8841656d nvmf: add nvmf_host_free() 00:01:26.711 [Pipeline] sh 00:01:26.990 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:26.990 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:27.006 [Pipeline] sh 00:01:27.283 + scp -F ssh_conf -r /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:27.299 [Pipeline] sh 00:01:27.575 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:01:27.575 ++ readlink -f spdk_repo 00:01:27.575 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:27.575 + [[ -n /home/vagrant/spdk_repo ]] 00:01:27.575 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:27.575 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:27.575 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:27.575 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:27.575 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:27.575 + cd /home/vagrant/spdk_repo 00:01:27.575 + source /etc/os-release 00:01:27.575 ++ NAME='CentOS Linux' 00:01:27.575 ++ VERSION='7 (Core)' 00:01:27.575 ++ ID=centos 00:01:27.575 ++ ID_LIKE='rhel fedora' 00:01:27.575 ++ VERSION_ID=7 00:01:27.575 ++ PRETTY_NAME='CentOS Linux 7 (Core)' 00:01:27.575 ++ ANSI_COLOR='0;31' 00:01:27.575 ++ CPE_NAME=cpe:/o:centos:centos:7 00:01:27.575 ++ HOME_URL=https://www.centos.org/ 00:01:27.575 ++ BUG_REPORT_URL=https://bugs.centos.org/ 00:01:27.575 ++ CENTOS_MANTISBT_PROJECT=CentOS-7 00:01:27.575 ++ CENTOS_MANTISBT_PROJECT_VERSION=7 00:01:27.575 ++ REDHAT_SUPPORT_PRODUCT=centos 00:01:27.575 ++ REDHAT_SUPPORT_PRODUCT_VERSION=7 00:01:27.575 + uname -a 00:01:27.575 Linux centos7-cloud-1711172311-2200 3.10.0-1160.114.2.el7.x86_64 #1 SMP Wed Mar 20 15:54:52 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:27.575 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:27.575 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:27.575 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:01:27.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:01:27.833 Hugepages 00:01:27.833 node hugesize free / total 00:01:27.833 node0 1048576kB 0 / 0 00:01:27.833 node0 2048kB 0 / 0 00:01:27.833 00:01:27.833 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:27.833 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:27.833 NVMe 0000:00:10.0 1b36 0010 0 nvme nvme0 nvme0n1 00:01:27.833 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:01:27.833 + rm -f /tmp/spdk-ld-path 
00:01:27.833 + source autorun-spdk.conf 00:01:27.833 ++ SPDK_TEST_UNITTEST=1 00:01:27.833 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.833 ++ SPDK_TEST_BLOCKDEV=1 00:01:27.833 ++ SPDK_TEST_DAOS=1 00:01:27.833 ++ SPDK_RUN_ASAN=1 00:01:27.833 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.833 ++ RUN_NIGHTLY=0 00:01:27.833 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:27.833 + [[ -n '' ]] 00:01:27.833 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:27.833 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:27.833 + for M in /var/spdk/build-*-manifest.txt 00:01:27.833 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:27.833 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:27.833 + for M in /var/spdk/build-*-manifest.txt 00:01:27.833 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:27.833 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:27.833 ++ uname 00:01:27.833 + [[ Linux == \L\i\n\u\x ]] 00:01:27.833 + sudo dmesg -T 00:01:27.833 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:27.833 + sudo dmesg --clear 00:01:27.833 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:27.833 + dmesg_pid=2622 00:01:27.833 + [[ CentOS Linux == FreeBSD ]] 00:01:27.833 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.833 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.833 + sudo dmesg -Tw 00:01:27.833 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:27.833 + [[ -x /usr/src/fio-static/fio ]] 00:01:27.833 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:27.833 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:27.833 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:27.833 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:27.833 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:27.834 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:27.834 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:27.834 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:27.834 Test configuration: 00:01:27.834 SPDK_TEST_UNITTEST=1 00:01:27.834 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.834 SPDK_TEST_BLOCKDEV=1 00:01:27.834 SPDK_TEST_DAOS=1 00:01:27.834 SPDK_RUN_ASAN=1 00:01:27.834 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.092 RUN_NIGHTLY=0 23:16:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:28.092 23:16:50 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.092 23:16:50 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.092 23:16:50 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.092 23:16:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:01:28.093 23:16:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:01:28.093 23:16:50 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:01:28.093 23:16:50 -- paths/export.sh@5 -- $ export PATH 00:01:28.093 23:16:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:01:28.093 23:16:50 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:28.093 23:16:50 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:28.093 23:16:50 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715728610.XXXXXX 00:01:28.093 23:16:50 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715728610.SiWpjX 00:01:28.093 23:16:50 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:28.093 23:16:50 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:28.093 23:16:50 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:28.093 23:16:50 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:28.093 23:16:50 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.093 23:16:50 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:28.093 23:16:50 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:28.093 23:16:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.093 23:16:50 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos' 00:01:28.093 23:16:50 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:28.093 23:16:50 -- pm/common@17 -- $ local monitor 00:01:28.093 23:16:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.093 23:16:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.093 23:16:50 -- pm/common@25 -- $ sleep 1 00:01:28.093 23:16:50 -- pm/common@21 -- $ date +%s 00:01:28.093 23:16:50 -- pm/common@21 -- $ date +%s 00:01:28.093 23:16:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715728610 00:01:28.093 23:16:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715728610 00:01:28.093 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715728610_collect-vmstat.pm.log 00:01:28.093 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715728610_collect-cpu-load.pm.log 00:01:29.026 23:16:51 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:29.026 23:16:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.026 23:16:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.026 23:16:51 -- spdk/autobuild.sh@13 -- $ cd 
/home/vagrant/spdk_repo/spdk 00:01:29.026 23:16:51 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.026 Tue May 14 23:16:51 UTC 2024 00:01:29.026 23:16:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.026 v24.05-pre-639-ge8841656d 00:01:29.026 23:16:51 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:29.026 23:16:51 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:29.026 23:16:51 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:29.026 23:16:51 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:29.026 23:16:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.026 ************************************ 00:01:29.026 START TEST asan 00:01:29.026 ************************************ 00:01:29.026 using asan 00:01:29.026 ************************************ 00:01:29.026 END TEST asan 00:01:29.026 ************************************ 00:01:29.026 23:16:51 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:01:29.026 00:01:29.026 real 0m0.000s 00:01:29.026 user 0m0.000s 00:01:29.026 sys 0m0.000s 00:01:29.026 23:16:51 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:29.026 23:16:51 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.026 23:16:51 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:01:29.026 23:16:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:29.026 23:16:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:29.026 23:16:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:29.026 23:16:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:29.026 23:16:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:29.026 23:16:51 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:29.026 23:16:51 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:29.026 23:16:51 -- common/autobuild_common.sh@413 -- $ run_test unittest_build _unittest_build 00:01:29.026 23:16:51 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:29.026 23:16:51 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:29.026 23:16:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.027 ************************************ 00:01:29.027 START TEST unittest_build 00:01:29.027 ************************************ 00:01:29.027 23:16:51 unittest_build -- common/autotest_common.sh@1121 -- $ _unittest_build 00:01:29.027 23:16:51 unittest_build -- common/autobuild_common.sh@404 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos --without-shared 00:01:29.284 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:29.284 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:29.284 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:01:29.542 Using 'verbs' RDMA provider 00:01:30.109 WARNING: ISA-L & DPDK crypto cannot be used as nasm ver must be 2.14 or newer. 00:01:30.109 Without ISA-L, there is no software support for crypto or compression, 00:01:30.109 so these features will be disabled. 00:01:30.365 Creating mk/config.mk...done. 00:01:30.365 Creating mk/cc.flags.mk...done. 00:01:30.365 Type 'make' to build. 00:01:30.365 23:16:52 unittest_build -- common/autobuild_common.sh@405 -- $ make -j10 00:01:30.623 make[1]: Nothing to be done for 'all'. 
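The configure line above records the full option set for this unittest build, so it can be reproduced by hand from an SPDK checkout; a minimal sketch using the flags exactly as logged (the -j value simply matches the 10 vCPUs given to the VM):

  cd ~/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos \
      --without-shared
  make -j10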
00:01:34.805 The Meson build system 00:01:34.805 Version: 0.61.5 00:01:34.805 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:34.805 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:34.805 Build type: native build 00:01:34.805 Program cat found: YES (/bin/cat) 00:01:34.805 Project name: DPDK 00:01:34.805 Project version: 23.11.0 00:01:34.805 C compiler for the host machine: cc (gcc 10.2.1 "cc (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)") 00:01:34.805 C linker for the host machine: cc ld.bfd 2.35-5 00:01:34.805 Host machine cpu family: x86_64 00:01:34.805 Host machine cpu: x86_64 00:01:34.805 Message: ## Building in Developer Mode ## 00:01:34.805 Program pkg-config found: YES (/bin/pkg-config) 00:01:34.805 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:34.805 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:34.805 Program python3 found: YES (/usr/bin/python3) 00:01:34.805 Program cat found: YES (/bin/cat) 00:01:34.805 Compiler for C supports arguments -march=native: YES 00:01:34.805 Checking for size of "void *" : 8 00:01:34.805 Checking for size of "void *" : 8 00:01:34.805 Library m found: YES 00:01:34.805 Library numa found: YES 00:01:34.805 Has header "numaif.h" : YES 00:01:34.805 Library fdt found: NO 00:01:34.805 Library execinfo found: NO 00:01:34.805 Has header "execinfo.h" : YES 00:01:34.805 Found pkg-config: /bin/pkg-config (0.27.1) 00:01:34.805 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:34.805 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:34.805 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:34.805 Run-time dependency openssl found: YES 1.0.2k 00:01:34.805 Run-time dependency libpcap found: NO (tried pkgconfig) 00:01:34.805 Library pcap found: NO 00:01:34.805 Compiler for C supports arguments -Wcast-qual: YES 00:01:34.805 Compiler for C supports arguments -Wdeprecated: YES 00:01:34.805 Compiler for C supports arguments -Wformat: YES 00:01:34.805 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:34.805 Compiler for C supports arguments -Wformat-security: NO 00:01:34.805 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:34.805 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:34.805 Compiler for C supports arguments -Wnested-externs: YES 00:01:34.805 Compiler for C supports arguments -Wold-style-definition: YES 00:01:34.805 Compiler for C supports arguments -Wpointer-arith: YES 00:01:34.805 Compiler for C supports arguments -Wsign-compare: YES 00:01:34.805 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:34.805 Compiler for C supports arguments -Wundef: YES 00:01:34.805 Compiler for C supports arguments -Wwrite-strings: YES 00:01:34.805 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:34.805 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:34.805 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:34.805 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:34.805 Program objdump found: YES (/bin/objdump) 00:01:34.805 Compiler for C supports arguments -mavx512f: YES 00:01:34.805 Checking if "AVX512 checking" compiles: YES 00:01:34.805 Fetching value of define "__SSE4_2__" : 1 00:01:34.805 Fetching value of define "__AES__" : 1 00:01:34.805 Fetching value of define "__AVX__" : 1 00:01:34.805 Fetching value of define "__AVX2__" : 1 
00:01:34.805 Fetching value of define "__AVX512BW__" : 00:01:34.805 Fetching value of define "__AVX512CD__" : 00:01:34.805 Fetching value of define "__AVX512DQ__" : 00:01:34.805 Fetching value of define "__AVX512F__" : 00:01:34.805 Fetching value of define "__AVX512VL__" : 00:01:34.805 Fetching value of define "__PCLMUL__" : 1 00:01:34.805 Fetching value of define "__RDRND__" : 1 00:01:34.805 Fetching value of define "__RDSEED__" : 1 00:01:34.805 Fetching value of define "__VPCLMULQDQ__" : 00:01:34.805 Fetching value of define "__znver1__" : 00:01:34.805 Fetching value of define "__znver2__" : 00:01:34.805 Fetching value of define "__znver3__" : 00:01:34.805 Fetching value of define "__znver4__" : 00:01:34.805 Library asan found: YES 00:01:34.805 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:34.805 Message: lib/log: Defining dependency "log" 00:01:34.805 Message: lib/kvargs: Defining dependency "kvargs" 00:01:34.805 Message: lib/telemetry: Defining dependency "telemetry" 00:01:34.805 Library rt found: YES 00:01:34.805 Checking for function "getentropy" : NO 00:01:34.805 Message: lib/eal: Defining dependency "eal" 00:01:34.805 Message: lib/ring: Defining dependency "ring" 00:01:34.805 Message: lib/rcu: Defining dependency "rcu" 00:01:34.805 Message: lib/mempool: Defining dependency "mempool" 00:01:34.805 Message: lib/mbuf: Defining dependency "mbuf" 00:01:34.805 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:34.805 Fetching value of define "__AVX512F__" : (cached) 00:01:34.805 Compiler for C supports arguments -mpclmul: YES 00:01:34.805 Compiler for C supports arguments -maes: YES 00:01:36.181 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:36.181 Compiler for C supports arguments -mavx512bw: YES 00:01:36.181 Compiler for C supports arguments -mavx512dq: YES 00:01:36.181 Compiler for C supports arguments -mavx512vl: YES 00:01:36.181 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:36.181 Compiler for C supports arguments -mavx2: YES 00:01:36.181 Compiler for C supports arguments -mavx: YES 00:01:36.181 Message: lib/net: Defining dependency "net" 00:01:36.181 Message: lib/meter: Defining dependency "meter" 00:01:36.181 Message: lib/ethdev: Defining dependency "ethdev" 00:01:36.181 Message: lib/pci: Defining dependency "pci" 00:01:36.181 Message: lib/cmdline: Defining dependency "cmdline" 00:01:36.181 Message: lib/hash: Defining dependency "hash" 00:01:36.181 Message: lib/timer: Defining dependency "timer" 00:01:36.181 Message: lib/compressdev: Defining dependency "compressdev" 00:01:36.181 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:36.181 Message: lib/dmadev: Defining dependency "dmadev" 00:01:36.181 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:36.181 Message: lib/power: Defining dependency "power" 00:01:36.181 Message: lib/reorder: Defining dependency "reorder" 00:01:36.181 Message: lib/security: Defining dependency "security" 00:01:36.181 Has header "linux/userfaultfd.h" : YES 00:01:36.181 Has header "linux/vduse.h" : NO 00:01:36.181 Message: lib/vhost: Defining dependency "vhost" 00:01:36.181 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:36.181 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:36.181 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:36.181 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:36.181 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:36.181 Message: Disabling regex/* 
drivers: missing internal dependency "regexdev" 00:01:36.181 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:36.181 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:36.181 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:36.181 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:36.181 Program doxygen found: YES (/bin/doxygen) 00:01:36.181 Configuring doxy-api-html.conf using configuration 00:01:36.181 Configuring doxy-api-man.conf using configuration 00:01:36.181 Program mandb found: YES (/bin/mandb) 00:01:36.181 Program sphinx-build found: NO 00:01:36.181 Configuring rte_build_config.h using configuration 00:01:36.181 Message: 00:01:36.181 ================= 00:01:36.181 Applications Enabled 00:01:36.181 ================= 00:01:36.181 00:01:36.181 apps: 00:01:36.181 00:01:36.181 00:01:36.181 Message: 00:01:36.181 ================= 00:01:36.181 Libraries Enabled 00:01:36.181 ================= 00:01:36.181 00:01:36.181 libs: 00:01:36.181 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:36.181 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:36.181 cryptodev, dmadev, power, reorder, security, vhost, 00:01:36.181 00:01:36.181 Message: 00:01:36.181 =============== 00:01:36.181 Drivers Enabled 00:01:36.181 =============== 00:01:36.181 00:01:36.181 common: 00:01:36.181 00:01:36.181 bus: 00:01:36.181 pci, vdev, 00:01:36.181 mempool: 00:01:36.181 ring, 00:01:36.181 dma: 00:01:36.181 00:01:36.181 net: 00:01:36.181 00:01:36.181 crypto: 00:01:36.181 00:01:36.181 compress: 00:01:36.181 00:01:36.181 vdpa: 00:01:36.181 00:01:36.181 00:01:36.181 Message: 00:01:36.181 ================= 00:01:36.181 Content Skipped 00:01:36.181 ================= 00:01:36.181 00:01:36.181 apps: 00:01:36.181 dumpcap: explicitly disabled via build config 00:01:36.181 graph: explicitly disabled via build config 00:01:36.181 pdump: explicitly disabled via build config 00:01:36.181 proc-info: explicitly disabled via build config 00:01:36.181 test-acl: explicitly disabled via build config 00:01:36.181 test-bbdev: explicitly disabled via build config 00:01:36.181 test-cmdline: explicitly disabled via build config 00:01:36.181 test-compress-perf: explicitly disabled via build config 00:01:36.181 test-crypto-perf: explicitly disabled via build config 00:01:36.181 test-dma-perf: explicitly disabled via build config 00:01:36.181 test-eventdev: explicitly disabled via build config 00:01:36.181 test-fib: explicitly disabled via build config 00:01:36.181 test-flow-perf: explicitly disabled via build config 00:01:36.181 test-gpudev: explicitly disabled via build config 00:01:36.181 test-mldev: explicitly disabled via build config 00:01:36.181 test-pipeline: explicitly disabled via build config 00:01:36.181 test-pmd: explicitly disabled via build config 00:01:36.181 test-regex: explicitly disabled via build config 00:01:36.181 test-sad: explicitly disabled via build config 00:01:36.181 test-security-perf: explicitly disabled via build config 00:01:36.181 00:01:36.181 libs: 00:01:36.181 metrics: explicitly disabled via build config 00:01:36.181 acl: explicitly disabled via build config 00:01:36.181 bbdev: explicitly disabled via build config 00:01:36.181 bitratestats: explicitly disabled via build config 00:01:36.181 bpf: explicitly disabled via build config 00:01:36.181 cfgfile: explicitly disabled via build config 00:01:36.181 distributor: explicitly disabled via build config 00:01:36.181 efd: 
explicitly disabled via build config 00:01:36.181 eventdev: explicitly disabled via build config 00:01:36.181 dispatcher: explicitly disabled via build config 00:01:36.181 gpudev: explicitly disabled via build config 00:01:36.181 gro: explicitly disabled via build config 00:01:36.181 gso: explicitly disabled via build config 00:01:36.181 ip_frag: explicitly disabled via build config 00:01:36.181 jobstats: explicitly disabled via build config 00:01:36.181 latencystats: explicitly disabled via build config 00:01:36.181 lpm: explicitly disabled via build config 00:01:36.181 member: explicitly disabled via build config 00:01:36.181 pcapng: explicitly disabled via build config 00:01:36.181 rawdev: explicitly disabled via build config 00:01:36.181 regexdev: explicitly disabled via build config 00:01:36.181 mldev: explicitly disabled via build config 00:01:36.181 rib: explicitly disabled via build config 00:01:36.181 sched: explicitly disabled via build config 00:01:36.181 stack: explicitly disabled via build config 00:01:36.181 ipsec: explicitly disabled via build config 00:01:36.181 pdcp: explicitly disabled via build config 00:01:36.181 fib: explicitly disabled via build config 00:01:36.181 port: explicitly disabled via build config 00:01:36.181 pdump: explicitly disabled via build config 00:01:36.181 table: explicitly disabled via build config 00:01:36.181 pipeline: explicitly disabled via build config 00:01:36.181 graph: explicitly disabled via build config 00:01:36.181 node: explicitly disabled via build config 00:01:36.181 00:01:36.181 drivers: 00:01:36.181 common/cpt: not in enabled drivers build config 00:01:36.181 common/dpaax: not in enabled drivers build config 00:01:36.181 common/iavf: not in enabled drivers build config 00:01:36.181 common/idpf: not in enabled drivers build config 00:01:36.181 common/mvep: not in enabled drivers build config 00:01:36.181 common/octeontx: not in enabled drivers build config 00:01:36.181 bus/auxiliary: not in enabled drivers build config 00:01:36.181 bus/cdx: not in enabled drivers build config 00:01:36.181 bus/dpaa: not in enabled drivers build config 00:01:36.181 bus/fslmc: not in enabled drivers build config 00:01:36.181 bus/ifpga: not in enabled drivers build config 00:01:36.181 bus/platform: not in enabled drivers build config 00:01:36.181 bus/vmbus: not in enabled drivers build config 00:01:36.181 common/cnxk: not in enabled drivers build config 00:01:36.181 common/mlx5: not in enabled drivers build config 00:01:36.181 common/nfp: not in enabled drivers build config 00:01:36.181 common/qat: not in enabled drivers build config 00:01:36.181 common/sfc_efx: not in enabled drivers build config 00:01:36.181 mempool/bucket: not in enabled drivers build config 00:01:36.181 mempool/cnxk: not in enabled drivers build config 00:01:36.181 mempool/dpaa: not in enabled drivers build config 00:01:36.181 mempool/dpaa2: not in enabled drivers build config 00:01:36.181 mempool/octeontx: not in enabled drivers build config 00:01:36.181 mempool/stack: not in enabled drivers build config 00:01:36.181 dma/cnxk: not in enabled drivers build config 00:01:36.181 dma/dpaa: not in enabled drivers build config 00:01:36.181 dma/dpaa2: not in enabled drivers build config 00:01:36.181 dma/hisilicon: not in enabled drivers build config 00:01:36.181 dma/idxd: not in enabled drivers build config 00:01:36.181 dma/ioat: not in enabled drivers build config 00:01:36.181 dma/skeleton: not in enabled drivers build config 00:01:36.181 net/af_packet: not in enabled drivers build config 
00:01:36.181 net/af_xdp: not in enabled drivers build config 00:01:36.181 net/ark: not in enabled drivers build config 00:01:36.181 net/atlantic: not in enabled drivers build config 00:01:36.181 net/avp: not in enabled drivers build config 00:01:36.181 net/axgbe: not in enabled drivers build config 00:01:36.181 net/bnx2x: not in enabled drivers build config 00:01:36.181 net/bnxt: not in enabled drivers build config 00:01:36.181 net/bonding: not in enabled drivers build config 00:01:36.182 net/cnxk: not in enabled drivers build config 00:01:36.182 net/cpfl: not in enabled drivers build config 00:01:36.182 net/cxgbe: not in enabled drivers build config 00:01:36.182 net/dpaa: not in enabled drivers build config 00:01:36.182 net/dpaa2: not in enabled drivers build config 00:01:36.182 net/e1000: not in enabled drivers build config 00:01:36.182 net/ena: not in enabled drivers build config 00:01:36.182 net/enetc: not in enabled drivers build config 00:01:36.182 net/enetfec: not in enabled drivers build config 00:01:36.182 net/enic: not in enabled drivers build config 00:01:36.182 net/failsafe: not in enabled drivers build config 00:01:36.182 net/fm10k: not in enabled drivers build config 00:01:36.182 net/gve: not in enabled drivers build config 00:01:36.182 net/hinic: not in enabled drivers build config 00:01:36.182 net/hns3: not in enabled drivers build config 00:01:36.182 net/i40e: not in enabled drivers build config 00:01:36.182 net/iavf: not in enabled drivers build config 00:01:36.182 net/ice: not in enabled drivers build config 00:01:36.182 net/idpf: not in enabled drivers build config 00:01:36.182 net/igc: not in enabled drivers build config 00:01:36.182 net/ionic: not in enabled drivers build config 00:01:36.182 net/ipn3ke: not in enabled drivers build config 00:01:36.182 net/ixgbe: not in enabled drivers build config 00:01:36.182 net/mana: not in enabled drivers build config 00:01:36.182 net/memif: not in enabled drivers build config 00:01:36.182 net/mlx4: not in enabled drivers build config 00:01:36.182 net/mlx5: not in enabled drivers build config 00:01:36.182 net/mvneta: not in enabled drivers build config 00:01:36.182 net/mvpp2: not in enabled drivers build config 00:01:36.182 net/netvsc: not in enabled drivers build config 00:01:36.182 net/nfb: not in enabled drivers build config 00:01:36.182 net/nfp: not in enabled drivers build config 00:01:36.182 net/ngbe: not in enabled drivers build config 00:01:36.182 net/null: not in enabled drivers build config 00:01:36.182 net/octeontx: not in enabled drivers build config 00:01:36.182 net/octeon_ep: not in enabled drivers build config 00:01:36.182 net/pcap: not in enabled drivers build config 00:01:36.182 net/pfe: not in enabled drivers build config 00:01:36.182 net/qede: not in enabled drivers build config 00:01:36.182 net/ring: not in enabled drivers build config 00:01:36.182 net/sfc: not in enabled drivers build config 00:01:36.182 net/softnic: not in enabled drivers build config 00:01:36.182 net/tap: not in enabled drivers build config 00:01:36.182 net/thunderx: not in enabled drivers build config 00:01:36.182 net/txgbe: not in enabled drivers build config 00:01:36.182 net/vdev_netvsc: not in enabled drivers build config 00:01:36.182 net/vhost: not in enabled drivers build config 00:01:36.182 net/virtio: not in enabled drivers build config 00:01:36.182 net/vmxnet3: not in enabled drivers build config 00:01:36.182 raw/*: missing internal dependency, "rawdev" 00:01:36.182 crypto/armv8: not in enabled drivers build config 00:01:36.182 
crypto/bcmfs: not in enabled drivers build config 00:01:36.182 crypto/caam_jr: not in enabled drivers build config 00:01:36.182 crypto/ccp: not in enabled drivers build config 00:01:36.182 crypto/cnxk: not in enabled drivers build config 00:01:36.182 crypto/dpaa_sec: not in enabled drivers build config 00:01:36.182 crypto/dpaa2_sec: not in enabled drivers build config 00:01:36.182 crypto/ipsec_mb: not in enabled drivers build config 00:01:36.182 crypto/mlx5: not in enabled drivers build config 00:01:36.182 crypto/mvsam: not in enabled drivers build config 00:01:36.182 crypto/nitrox: not in enabled drivers build config 00:01:36.182 crypto/null: not in enabled drivers build config 00:01:36.182 crypto/octeontx: not in enabled drivers build config 00:01:36.182 crypto/openssl: not in enabled drivers build config 00:01:36.182 crypto/scheduler: not in enabled drivers build config 00:01:36.182 crypto/uadk: not in enabled drivers build config 00:01:36.182 crypto/virtio: not in enabled drivers build config 00:01:36.182 compress/isal: not in enabled drivers build config 00:01:36.182 compress/mlx5: not in enabled drivers build config 00:01:36.182 compress/octeontx: not in enabled drivers build config 00:01:36.182 compress/zlib: not in enabled drivers build config 00:01:36.182 regex/*: missing internal dependency, "regexdev" 00:01:36.182 ml/*: missing internal dependency, "mldev" 00:01:36.182 vdpa/ifc: not in enabled drivers build config 00:01:36.182 vdpa/mlx5: not in enabled drivers build config 00:01:36.182 vdpa/nfp: not in enabled drivers build config 00:01:36.182 vdpa/sfc: not in enabled drivers build config 00:01:36.182 event/*: missing internal dependency, "eventdev" 00:01:36.182 baseband/*: missing internal dependency, "bbdev" 00:01:36.182 gpu/*: missing internal dependency, "gpudev" 00:01:36.182 00:01:36.182 00:01:36.748 Build targets in project: 85 00:01:36.748 00:01:36.748 DPDK 23.11.0 00:01:36.748 00:01:36.748 User defined options 00:01:36.748 buildtype : debug 00:01:36.748 default_library : static 00:01:36.748 libdir : lib 00:01:36.748 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:36.748 b_sanitize : address 00:01:36.748 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror 00:01:36.748 c_link_args : 00:01:36.748 cpu_instruction_set: native 00:01:36.748 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:36.748 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:36.748 enable_docs : false 00:01:36.748 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:36.748 enable_kmods : false 00:01:36.748 tests : false 00:01:36.748 00:01:36.748 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:36.749 NOTICE: You are using Python 3.6 which is EOL. 
Starting with v0.62.0, Meson will require Python 3.7 or newer 00:01:37.681 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:37.681 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.681 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.681 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:37.681 [4/264] Linking static target lib/librte_kvargs.a 00:01:37.681 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.681 [6/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:37.681 [7/264] Linking static target lib/librte_log.a 00:01:37.681 [8/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.681 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.681 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.939 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:37.939 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:37.939 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:37.939 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:37.939 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:37.939 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:37.939 [17/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:37.939 [18/264] Linking static target lib/librte_telemetry.a 00:01:38.197 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:38.197 [20/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.197 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.197 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.197 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.197 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:38.455 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:38.455 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.455 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.455 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:38.455 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:38.455 [30/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.455 [31/264] Linking target lib/librte_log.so.24.0 00:01:38.455 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.455 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:38.714 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.714 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:38.714 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:38.714 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:38.714 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 
00:01:38.714 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.714 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:38.714 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.714 [42/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.971 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:38.971 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:38.971 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:38.971 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:38.971 [47/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:38.971 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:39.229 [49/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:39.229 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:39.229 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:39.229 [52/264] Linking target lib/librte_kvargs.so.24.0 00:01:39.229 [53/264] Linking target lib/librte_telemetry.so.24.0 00:01:39.229 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:39.229 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:39.229 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:39.229 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:39.229 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:39.229 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:39.229 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:39.486 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:39.486 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:39.486 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:39.486 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:39.486 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:39.745 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:39.745 [67/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:39.745 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:39.745 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:39.745 [70/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:39.745 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.745 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:39.745 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:39.745 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:39.745 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:39.745 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:39.745 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:40.004 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:40.004 [79/264] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.004 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:40.004 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:40.262 [82/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:40.262 [83/264] Linking static target lib/librte_ring.a 00:01:40.262 [84/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.262 [85/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.262 [86/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.262 [87/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:40.262 [88/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:40.521 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.521 [90/264] Linking static target lib/librte_mempool.a 00:01:40.521 [91/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.521 [92/264] Linking static target lib/librte_eal.a 00:01:40.521 [93/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.521 [94/264] Linking static target lib/librte_rcu.a 00:01:40.521 [95/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.778 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.778 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.778 [98/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:40.778 [99/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:41.036 [100/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:41.036 [101/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:41.036 [102/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:41.036 [103/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:41.036 [104/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.293 [105/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:41.293 [106/264] Linking static target lib/librte_mbuf.a 00:01:41.293 [107/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:41.293 [108/264] Linking static target lib/librte_net.a 00:01:41.293 [109/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.293 [110/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:41.293 [111/264] Linking static target lib/librte_meter.a 00:01:41.550 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:41.550 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:41.808 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:41.808 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:41.808 [116/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.808 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:42.067 [118/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.343 [119/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.343 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:01:42.604 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:42.604 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:42.604 [123/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.604 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:42.604 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:42.604 [126/264] Linking static target lib/librte_pci.a 00:01:42.604 [127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:42.604 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:42.861 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:42.861 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:42.861 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:42.861 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:42.861 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:42.861 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:42.861 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:43.119 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:43.119 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:43.119 [138/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:43.119 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:43.119 [140/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:43.119 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:43.119 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:43.119 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:43.119 [144/264] Linking static target lib/librte_cmdline.a 00:01:43.376 [145/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.376 [146/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:43.634 [147/264] Linking static target lib/librte_timer.a 00:01:43.634 [148/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:43.634 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:43.634 [150/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:43.893 [151/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:43.893 [152/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:43.893 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:44.152 [154/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:44.152 [155/264] Linking static target lib/librte_compressdev.a 00:01:44.152 [156/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:44.152 [157/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:44.152 [158/264] Linking static target lib/librte_hash.a 00:01:44.152 [159/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:44.152 [160/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:44.410 
[161/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.410 [162/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:44.410 [163/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:44.410 [164/264] Linking static target lib/librte_dmadev.a 00:01:44.410 [165/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.410 [166/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:44.669 [167/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:44.669 [168/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:44.669 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:44.927 [170/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:44.927 [171/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:44.927 [172/264] Linking static target lib/librte_ethdev.a 00:01:44.927 [173/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:44.927 [174/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.927 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:44.927 [176/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:45.185 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:45.185 [178/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.185 [179/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.185 [180/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:45.185 [181/264] Linking static target lib/librte_cryptodev.a 00:01:45.185 [182/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:45.443 [183/264] Linking static target lib/librte_power.a 00:01:45.443 [184/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:45.443 [185/264] Linking static target lib/librte_reorder.a 00:01:45.701 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:45.701 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:45.701 [188/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:45.701 [189/264] Linking static target lib/librte_security.a 00:01:45.701 [190/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:46.325 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.325 [192/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.325 [193/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.608 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:46.608 [195/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.608 [196/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:46.608 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:46.866 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.866 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.866 [200/264] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:46.866 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:47.124 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.124 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:47.124 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:47.381 [205/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.381 [206/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.381 [207/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.798 [208/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:47.798 [209/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:47.798 [210/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:47.798 [211/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.798 [212/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.798 [213/264] Linking static target drivers/librte_bus_pci.a 00:01:47.798 [214/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:47.798 [215/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.798 [216/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.798 [217/264] Linking static target drivers/librte_bus_vdev.a 00:01:47.798 [218/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:47.798 [219/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.798 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.798 [221/264] Linking static target drivers/librte_mempool_ring.a 00:01:48.429 [222/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.429 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.686 [224/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.686 [225/264] Linking target lib/librte_eal.so.24.0 00:01:49.251 [226/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:49.251 [227/264] Linking target lib/librte_meter.so.24.0 00:01:49.251 [228/264] Linking target lib/librte_pci.so.24.0 00:01:49.251 [229/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:49.251 [230/264] Linking target lib/librte_timer.so.24.0 00:01:49.251 [231/264] Linking target lib/librte_ring.so.24.0 00:01:49.252 [232/264] Linking target lib/librte_dmadev.so.24.0 00:01:49.817 [233/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:49.817 [234/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:49.817 [235/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:49.817 [236/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.817 [237/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:49.817 [238/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:49.817 
[239/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:49.817 [240/264] Linking target lib/librte_rcu.so.24.0 00:01:49.817 [241/264] Linking target lib/librte_mempool.so.24.0 00:01:50.382 [242/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.382 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:50.382 [244/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:50.382 [245/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:50.382 [246/264] Linking target lib/librte_mbuf.so.24.0 00:01:50.947 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:50.947 [248/264] Linking target lib/librte_net.so.24.0 00:01:50.947 [249/264] Linking target lib/librte_reorder.so.24.0 00:01:50.947 [250/264] Linking target lib/librte_compressdev.so.24.0 00:01:50.947 [251/264] Linking target lib/librte_cryptodev.so.24.0 00:01:51.512 [252/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:51.512 [253/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:51.512 [254/264] Linking target lib/librte_hash.so.24.0 00:01:51.512 [255/264] Linking target lib/librte_security.so.24.0 00:01:51.512 [256/264] Linking target lib/librte_cmdline.so.24.0 00:01:51.512 [257/264] Linking target lib/librte_ethdev.so.24.0 00:01:52.078 [258/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:52.078 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:52.436 [260/264] Linking target lib/librte_power.so.24.0 00:01:53.812 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:53.812 [262/264] Linking static target lib/librte_vhost.a 00:01:55.712 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.712 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:55.712 NOTICE: You are using Python 3.6 which is EOL. 
Starting with v0.62.0, Meson will require Python 3.7 or newer 00:01:57.089 CC lib/log/log.o 00:01:57.089 CC lib/ut_mock/mock.o 00:01:57.089 CC lib/log/log_flags.o 00:01:57.089 CC lib/ut/ut.o 00:01:57.089 CC lib/log/log_deprecated.o 00:01:57.347 LIB libspdk_ut_mock.a 00:01:57.347 LIB libspdk_log.a 00:01:57.347 LIB libspdk_ut.a 00:01:57.347 CC lib/dma/dma.o 00:01:57.347 CXX lib/trace_parser/trace.o 00:01:57.347 CC lib/ioat/ioat.o 00:01:57.347 CC lib/util/base64.o 00:01:57.347 CC lib/util/bit_array.o 00:01:57.605 CC lib/util/cpuset.o 00:01:57.605 CC lib/util/crc16.o 00:01:57.605 CC lib/util/crc32.o 00:01:57.605 CC lib/util/crc32c.o 00:01:57.605 CC lib/vfio_user/host/vfio_user_pci.o 00:01:57.605 LIB libspdk_dma.a 00:01:57.605 CC lib/util/crc32_ieee.o 00:01:57.605 CC lib/util/crc64.o 00:01:57.605 CC lib/util/dif.o 00:01:57.605 CC lib/util/fd.o 00:01:57.863 CC lib/util/file.o 00:01:57.863 CC lib/util/hexlify.o 00:01:57.863 CC lib/vfio_user/host/vfio_user.o 00:01:57.863 LIB libspdk_ioat.a 00:01:57.863 CC lib/util/iov.o 00:01:57.863 CC lib/util/math.o 00:01:57.863 CC lib/util/pipe.o 00:01:57.863 CC lib/util/strerror_tls.o 00:01:57.863 CC lib/util/string.o 00:01:57.863 LIB libspdk_vfio_user.a 00:01:58.122 CC lib/util/uuid.o 00:01:58.122 CC lib/util/fd_group.o 00:01:58.122 CC lib/util/xor.o 00:01:58.122 CC lib/util/zipf.o 00:01:58.380 LIB libspdk_trace_parser.a 00:01:58.381 LIB libspdk_util.a 00:01:58.638 CC lib/env_dpdk/env.o 00:01:58.638 CC lib/rdma/common.o 00:01:58.638 CC lib/conf/conf.o 00:01:58.638 CC lib/env_dpdk/memory.o 00:01:58.638 CC lib/idxd/idxd.o 00:01:58.638 CC lib/json/json_parse.o 00:01:58.638 CC lib/env_dpdk/pci.o 00:01:58.638 CC lib/rdma/rdma_verbs.o 00:01:58.638 CC lib/vmd/vmd.o 00:01:58.638 CC lib/idxd/idxd_user.o 00:01:58.638 LIB libspdk_conf.a 00:01:58.899 CC lib/env_dpdk/init.o 00:01:58.899 CC lib/env_dpdk/threads.o 00:01:58.899 CC lib/json/json_util.o 00:01:58.899 CC lib/json/json_write.o 00:01:58.899 LIB libspdk_rdma.a 00:01:58.899 CC lib/env_dpdk/pci_ioat.o 00:01:58.899 CC lib/env_dpdk/pci_virtio.o 00:01:59.158 CC lib/env_dpdk/pci_vmd.o 00:01:59.158 CC lib/vmd/led.o 00:01:59.158 CC lib/env_dpdk/pci_idxd.o 00:01:59.158 CC lib/env_dpdk/pci_event.o 00:01:59.158 CC lib/env_dpdk/sigbus_handler.o 00:01:59.158 LIB libspdk_idxd.a 00:01:59.158 CC lib/env_dpdk/pci_dpdk.o 00:01:59.158 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:59.158 LIB libspdk_vmd.a 00:01:59.158 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:59.158 LIB libspdk_json.a 00:01:59.416 CC lib/jsonrpc/jsonrpc_server.o 00:01:59.416 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:59.416 CC lib/jsonrpc/jsonrpc_client.o 00:01:59.416 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:59.674 LIB libspdk_jsonrpc.a 00:01:59.674 LIB libspdk_env_dpdk.a 00:01:59.932 CC lib/rpc/rpc.o 00:02:00.191 LIB libspdk_rpc.a 00:02:00.191 CC lib/keyring/keyring.o 00:02:00.191 CC lib/notify/notify.o 00:02:00.191 CC lib/trace/trace.o 00:02:00.191 CC lib/keyring/keyring_rpc.o 00:02:00.191 CC lib/trace/trace_flags.o 00:02:00.191 CC lib/notify/notify_rpc.o 00:02:00.191 CC lib/trace/trace_rpc.o 00:02:00.450 LIB libspdk_notify.a 00:02:00.450 LIB libspdk_keyring.a 00:02:00.450 LIB libspdk_trace.a 00:02:00.707 CC lib/thread/thread.o 00:02:00.708 CC lib/sock/sock.o 00:02:00.708 CC lib/thread/iobuf.o 00:02:00.708 CC lib/sock/sock_rpc.o 00:02:00.966 LIB libspdk_sock.a 00:02:01.224 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:01.224 CC lib/nvme/nvme_ctrlr.o 00:02:01.224 CC lib/nvme/nvme_fabric.o 00:02:01.224 CC lib/nvme/nvme_ns_cmd.o 00:02:01.224 CC lib/nvme/nvme_ns.o 00:02:01.224 CC 
lib/nvme/nvme_pcie_common.o 00:02:01.224 CC lib/nvme/nvme_pcie.o 00:02:01.224 CC lib/nvme/nvme_qpair.o 00:02:01.224 CC lib/nvme/nvme.o 00:02:01.791 LIB libspdk_thread.a 00:02:01.791 CC lib/nvme/nvme_quirks.o 00:02:01.791 CC lib/accel/accel.o 00:02:01.791 CC lib/nvme/nvme_transport.o 00:02:01.791 CC lib/nvme/nvme_discovery.o 00:02:01.791 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:01.791 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:02.049 CC lib/nvme/nvme_tcp.o 00:02:02.050 CC lib/nvme/nvme_opal.o 00:02:02.050 CC lib/nvme/nvme_io_msg.o 00:02:02.308 CC lib/nvme/nvme_poll_group.o 00:02:02.601 CC lib/blob/blobstore.o 00:02:02.601 CC lib/nvme/nvme_zns.o 00:02:02.601 CC lib/nvme/nvme_stubs.o 00:02:02.601 CC lib/accel/accel_rpc.o 00:02:02.601 CC lib/nvme/nvme_auth.o 00:02:02.601 CC lib/init/json_config.o 00:02:02.601 CC lib/init/subsystem.o 00:02:02.860 CC lib/nvme/nvme_cuse.o 00:02:02.860 CC lib/nvme/nvme_rdma.o 00:02:02.860 CC lib/accel/accel_sw.o 00:02:02.860 CC lib/init/subsystem_rpc.o 00:02:02.860 CC lib/init/rpc.o 00:02:02.860 CC lib/blob/request.o 00:02:03.119 CC lib/blob/zeroes.o 00:02:03.119 LIB libspdk_accel.a 00:02:03.119 LIB libspdk_init.a 00:02:03.119 CC lib/blob/blob_bs_dev.o 00:02:03.119 CC lib/virtio/virtio.o 00:02:03.119 CC lib/virtio/virtio_vhost_user.o 00:02:03.119 CC lib/virtio/virtio_vfio_user.o 00:02:03.119 CC lib/bdev/bdev.o 00:02:03.119 CC lib/bdev/bdev_rpc.o 00:02:03.377 CC lib/event/app.o 00:02:03.377 CC lib/bdev/bdev_zone.o 00:02:03.377 CC lib/virtio/virtio_pci.o 00:02:03.377 CC lib/event/reactor.o 00:02:03.377 CC lib/bdev/part.o 00:02:03.377 CC lib/bdev/scsi_nvme.o 00:02:03.377 CC lib/event/log_rpc.o 00:02:03.377 CC lib/event/app_rpc.o 00:02:03.635 LIB libspdk_virtio.a 00:02:03.635 CC lib/event/scheduler_static.o 00:02:03.635 LIB libspdk_nvme.a 00:02:03.894 LIB libspdk_event.a 00:02:04.460 LIB libspdk_blob.a 00:02:04.460 CC lib/blobfs/blobfs.o 00:02:04.460 CC lib/lvol/lvol.o 00:02:04.460 CC lib/blobfs/tree.o 00:02:05.024 LIB libspdk_bdev.a 00:02:05.024 CC lib/ftl/ftl_core.o 00:02:05.024 CC lib/nbd/nbd.o 00:02:05.024 CC lib/scsi/dev.o 00:02:05.024 CC lib/nbd/nbd_rpc.o 00:02:05.024 CC lib/ftl/ftl_init.o 00:02:05.024 CC lib/nvmf/ctrlr.o 00:02:05.024 CC lib/scsi/lun.o 00:02:05.024 CC lib/ftl/ftl_layout.o 00:02:05.024 LIB libspdk_lvol.a 00:02:05.282 CC lib/ftl/ftl_debug.o 00:02:05.282 LIB libspdk_blobfs.a 00:02:05.282 CC lib/ftl/ftl_io.o 00:02:05.282 CC lib/ftl/ftl_sb.o 00:02:05.282 CC lib/scsi/port.o 00:02:05.282 CC lib/scsi/scsi.o 00:02:05.282 CC lib/ftl/ftl_l2p.o 00:02:05.282 CC lib/ftl/ftl_l2p_flat.o 00:02:05.539 CC lib/ftl/ftl_nv_cache.o 00:02:05.539 CC lib/ftl/ftl_band.o 00:02:05.539 CC lib/scsi/scsi_bdev.o 00:02:05.539 CC lib/ftl/ftl_band_ops.o 00:02:05.539 CC lib/ftl/ftl_writer.o 00:02:05.539 LIB libspdk_nbd.a 00:02:05.539 CC lib/ftl/ftl_rq.o 00:02:05.539 CC lib/ftl/ftl_reloc.o 00:02:05.539 CC lib/scsi/scsi_pr.o 00:02:05.539 CC lib/ftl/ftl_l2p_cache.o 00:02:05.862 CC lib/ftl/ftl_p2l.o 00:02:05.862 CC lib/ftl/mngt/ftl_mngt.o 00:02:05.862 CC lib/scsi/scsi_rpc.o 00:02:05.862 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:05.862 CC lib/scsi/task.o 00:02:06.121 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:06.121 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:06.121 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:06.121 CC lib/nvmf/ctrlr_discovery.o 00:02:06.121 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:06.121 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:06.121 LIB libspdk_scsi.a 00:02:06.121 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:06.121 CC lib/nvmf/ctrlr_bdev.o 00:02:06.121 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:06.379 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:02:06.379 CC lib/iscsi/conn.o 00:02:06.379 CC lib/nvmf/subsystem.o 00:02:06.379 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:06.379 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:06.379 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:06.379 CC lib/ftl/utils/ftl_conf.o 00:02:06.379 CC lib/nvmf/nvmf.o 00:02:06.637 CC lib/ftl/utils/ftl_md.o 00:02:06.637 CC lib/ftl/utils/ftl_mempool.o 00:02:06.637 CC lib/ftl/utils/ftl_bitmap.o 00:02:06.637 CC lib/nvmf/nvmf_rpc.o 00:02:06.637 CC lib/ftl/utils/ftl_property.o 00:02:06.637 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:06.637 CC lib/iscsi/init_grp.o 00:02:06.637 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:06.896 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:06.896 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:06.896 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:06.896 CC lib/nvmf/transport.o 00:02:06.896 CC lib/iscsi/iscsi.o 00:02:06.896 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:07.154 CC lib/vhost/vhost.o 00:02:07.154 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:07.154 CC lib/nvmf/tcp.o 00:02:07.154 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:07.154 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:07.154 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:07.154 CC lib/ftl/base/ftl_base_dev.o 00:02:07.412 CC lib/nvmf/stubs.o 00:02:07.412 CC lib/nvmf/rdma.o 00:02:07.412 CC lib/ftl/base/ftl_base_bdev.o 00:02:07.412 CC lib/ftl/ftl_trace.o 00:02:07.412 CC lib/iscsi/md5.o 00:02:07.412 CC lib/vhost/vhost_rpc.o 00:02:07.412 CC lib/vhost/vhost_scsi.o 00:02:07.671 CC lib/vhost/vhost_blk.o 00:02:07.671 CC lib/iscsi/param.o 00:02:07.671 LIB libspdk_ftl.a 00:02:07.671 CC lib/iscsi/portal_grp.o 00:02:07.671 CC lib/vhost/rte_vhost_user.o 00:02:07.929 CC lib/iscsi/tgt_node.o 00:02:07.929 CC lib/iscsi/iscsi_subsystem.o 00:02:07.929 CC lib/iscsi/iscsi_rpc.o 00:02:07.929 CC lib/iscsi/task.o 00:02:08.495 LIB libspdk_iscsi.a 00:02:08.495 LIB libspdk_nvmf.a 00:02:08.753 LIB libspdk_vhost.a 00:02:09.012 CC module/env_dpdk/env_dpdk_rpc.o 00:02:09.012 CC module/blob/bdev/blob_bdev.o 00:02:09.012 CC module/keyring/file/keyring.o 00:02:09.012 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:09.012 CC module/sock/posix/posix.o 00:02:09.012 CC module/accel/dsa/accel_dsa.o 00:02:09.012 CC module/accel/iaa/accel_iaa.o 00:02:09.012 CC module/accel/ioat/accel_ioat.o 00:02:09.012 CC module/accel/error/accel_error.o 00:02:09.012 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:09.270 LIB libspdk_env_dpdk_rpc.a 00:02:09.270 CC module/accel/error/accel_error_rpc.o 00:02:09.270 LIB libspdk_scheduler_dynamic.a 00:02:09.270 CC module/keyring/file/keyring_rpc.o 00:02:09.270 CC module/accel/dsa/accel_dsa_rpc.o 00:02:09.270 CC module/accel/ioat/accel_ioat_rpc.o 00:02:09.270 CC module/accel/iaa/accel_iaa_rpc.o 00:02:09.270 LIB libspdk_scheduler_dpdk_governor.a 00:02:09.270 LIB libspdk_blob_bdev.a 00:02:09.270 LIB libspdk_accel_error.a 00:02:09.529 LIB libspdk_keyring_file.a 00:02:09.529 LIB libspdk_accel_ioat.a 00:02:09.529 CC module/scheduler/gscheduler/gscheduler.o 00:02:09.529 LIB libspdk_accel_dsa.a 00:02:09.529 LIB libspdk_accel_iaa.a 00:02:09.529 CC module/blobfs/bdev/blobfs_bdev.o 00:02:09.529 LIB libspdk_scheduler_gscheduler.a 00:02:09.529 CC module/bdev/delay/vbdev_delay.o 00:02:09.529 CC module/bdev/gpt/gpt.o 00:02:09.529 CC module/bdev/error/vbdev_error.o 00:02:09.529 CC module/bdev/lvol/vbdev_lvol.o 00:02:09.529 CC module/bdev/malloc/bdev_malloc.o 00:02:09.529 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:09.529 CC module/bdev/null/bdev_null.o 00:02:09.802 CC module/bdev/nvme/bdev_nvme.o 
00:02:09.802 LIB libspdk_sock_posix.a 00:02:09.802 CC module/bdev/null/bdev_null_rpc.o 00:02:09.802 CC module/bdev/gpt/vbdev_gpt.o 00:02:09.802 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:09.802 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:10.085 CC module/bdev/error/vbdev_error_rpc.o 00:02:10.085 LIB libspdk_bdev_null.a 00:02:10.085 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:10.085 LIB libspdk_bdev_delay.a 00:02:10.085 LIB libspdk_blobfs_bdev.a 00:02:10.085 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:10.085 CC module/bdev/nvme/nvme_rpc.o 00:02:10.085 LIB libspdk_bdev_error.a 00:02:10.085 LIB libspdk_bdev_gpt.a 00:02:10.085 LIB libspdk_bdev_lvol.a 00:02:10.085 CC module/bdev/passthru/vbdev_passthru.o 00:02:10.085 CC module/bdev/raid/bdev_raid.o 00:02:10.347 CC module/bdev/split/vbdev_split.o 00:02:10.348 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:10.348 LIB libspdk_bdev_malloc.a 00:02:10.348 CC module/bdev/aio/bdev_aio.o 00:02:10.348 CC module/bdev/ftl/bdev_ftl.o 00:02:10.348 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:10.348 CC module/bdev/nvme/bdev_mdns_client.o 00:02:10.612 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:10.612 CC module/bdev/split/vbdev_split_rpc.o 00:02:10.612 LIB libspdk_bdev_passthru.a 00:02:10.612 CC module/bdev/nvme/vbdev_opal.o 00:02:10.612 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:10.612 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:10.612 CC module/bdev/aio/bdev_aio_rpc.o 00:02:10.612 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:10.612 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:10.871 LIB libspdk_bdev_ftl.a 00:02:10.871 LIB libspdk_bdev_split.a 00:02:10.871 CC module/bdev/raid/bdev_raid_rpc.o 00:02:10.871 CC module/bdev/raid/bdev_raid_sb.o 00:02:10.871 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:10.871 LIB libspdk_bdev_zone_block.a 00:02:10.871 LIB libspdk_bdev_aio.a 00:02:10.871 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:10.871 CC module/bdev/raid/raid0.o 00:02:10.871 CC module/bdev/daos/bdev_daos.o 00:02:10.871 CC module/bdev/daos/bdev_daos_rpc.o 00:02:10.871 CC module/bdev/raid/raid1.o 00:02:11.129 CC module/bdev/raid/concat.o 00:02:11.129 LIB libspdk_bdev_virtio.a 00:02:11.388 LIB libspdk_bdev_daos.a 00:02:11.388 LIB libspdk_bdev_raid.a 00:02:11.646 LIB libspdk_bdev_nvme.a 00:02:11.904 CC module/event/subsystems/keyring/keyring.o 00:02:11.904 CC module/event/subsystems/iobuf/iobuf.o 00:02:11.904 CC module/event/subsystems/scheduler/scheduler.o 00:02:11.904 CC module/event/subsystems/sock/sock.o 00:02:11.904 CC module/event/subsystems/vmd/vmd.o 00:02:11.904 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:11.904 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:11.904 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:12.162 LIB libspdk_event_keyring.a 00:02:12.162 LIB libspdk_event_scheduler.a 00:02:12.162 LIB libspdk_event_vhost_blk.a 00:02:12.162 LIB libspdk_event_sock.a 00:02:12.162 LIB libspdk_event_vmd.a 00:02:12.162 LIB libspdk_event_iobuf.a 00:02:12.420 CC module/event/subsystems/accel/accel.o 00:02:12.420 LIB libspdk_event_accel.a 00:02:12.678 CC module/event/subsystems/bdev/bdev.o 00:02:12.678 LIB libspdk_event_bdev.a 00:02:12.937 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:12.937 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:12.937 CC module/event/subsystems/scsi/scsi.o 00:02:12.937 CC module/event/subsystems/nbd/nbd.o 00:02:13.196 LIB libspdk_event_nbd.a 00:02:13.196 LIB libspdk_event_scsi.a 00:02:13.196 LIB libspdk_event_nvmf.a 00:02:13.504 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:13.504 
CC module/event/subsystems/iscsi/iscsi.o 00:02:13.504 LIB libspdk_event_vhost_scsi.a 00:02:13.504 LIB libspdk_event_iscsi.a 00:02:13.777 CXX app/trace/trace.o 00:02:13.777 CC examples/ioat/perf/perf.o 00:02:13.777 CC examples/nvme/hello_world/hello_world.o 00:02:13.777 CC examples/vmd/lsvmd/lsvmd.o 00:02:13.777 CC examples/accel/perf/accel_perf.o 00:02:13.777 CC examples/sock/hello_world/hello_sock.o 00:02:13.777 CC test/accel/dif/dif.o 00:02:13.777 CC examples/bdev/hello_world/hello_bdev.o 00:02:13.777 CC examples/blob/hello_world/hello_blob.o 00:02:13.777 CC examples/nvmf/nvmf/nvmf.o 00:02:14.035 LINK lsvmd 00:02:14.035 LINK ioat_perf 00:02:14.035 LINK hello_world 00:02:14.035 LINK hello_sock 00:02:14.035 LINK dif 00:02:14.035 LINK hello_bdev 00:02:14.035 LINK hello_blob 00:02:14.294 LINK nvmf 00:02:14.294 LINK accel_perf 00:02:14.294 LINK spdk_trace 00:02:14.554 CC app/trace_record/trace_record.o 00:02:14.554 CC examples/ioat/verify/verify.o 00:02:14.814 LINK spdk_trace_record 00:02:14.814 LINK verify 00:02:14.814 CC examples/util/zipf/zipf.o 00:02:14.814 CC examples/vmd/led/led.o 00:02:15.073 CC examples/nvme/reconnect/reconnect.o 00:02:15.073 LINK zipf 00:02:15.073 LINK led 00:02:15.073 CC examples/thread/thread/thread_ex.o 00:02:15.331 LINK reconnect 00:02:15.331 CC app/nvmf_tgt/nvmf_main.o 00:02:15.331 LINK thread 00:02:15.331 LINK nvmf_tgt 00:02:15.331 CC examples/idxd/perf/perf.o 00:02:15.590 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:15.590 LINK idxd_perf 00:02:15.848 LINK interrupt_tgt 00:02:16.106 CC app/iscsi_tgt/iscsi_tgt.o 00:02:16.106 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:16.106 LINK iscsi_tgt 00:02:16.365 CC examples/blob/cli/blobcli.o 00:02:16.365 CC examples/bdev/bdevperf/bdevperf.o 00:02:16.365 CC test/app/bdev_svc/bdev_svc.o 00:02:16.365 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:16.365 LINK nvme_manage 00:02:16.623 LINK bdev_svc 00:02:16.623 LINK blobcli 00:02:16.623 CC test/bdev/bdevio/bdevio.o 00:02:16.623 LINK nvme_fuzz 00:02:16.882 LINK bdevperf 00:02:17.184 LINK bdevio 00:02:17.184 CC examples/nvme/arbitration/arbitration.o 00:02:17.442 CC examples/nvme/hotplug/hotplug.o 00:02:17.442 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:17.442 LINK arbitration 00:02:17.699 LINK hotplug 00:02:17.699 CC app/spdk_tgt/spdk_tgt.o 00:02:17.699 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:17.957 LINK spdk_tgt 00:02:17.957 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:18.215 LINK vhost_fuzz 00:02:18.473 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:18.473 LINK iscsi_fuzz 00:02:18.473 LINK cmb_copy 00:02:18.473 CC app/spdk_lspci/spdk_lspci.o 00:02:18.473 CC test/app/histogram_perf/histogram_perf.o 00:02:18.731 LINK spdk_lspci 00:02:18.731 CC app/spdk_nvme_perf/perf.o 00:02:18.731 CC examples/nvme/abort/abort.o 00:02:18.731 LINK histogram_perf 00:02:18.731 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:18.990 CC app/spdk_nvme_identify/identify.o 00:02:18.990 LINK abort 00:02:18.990 LINK pmr_persistence 00:02:19.248 CC test/blobfs/mkfs/mkfs.o 00:02:19.248 LINK spdk_nvme_perf 00:02:19.248 CC test/app/jsoncat/jsoncat.o 00:02:19.248 CC test/app/stub/stub.o 00:02:19.248 LINK mkfs 00:02:19.506 LINK jsoncat 00:02:19.506 CC app/spdk_nvme_discover/discovery_aer.o 00:02:19.506 LINK spdk_nvme_identify 00:02:19.506 CC app/spdk_top/spdk_top.o 00:02:19.506 LINK stub 00:02:19.506 LINK spdk_nvme_discover 00:02:19.764 CC app/vhost/vhost.o 00:02:20.023 CC app/spdk_dd/spdk_dd.o 00:02:20.023 CC app/fio/nvme/fio_plugin.o 00:02:20.023 LINK spdk_top 00:02:20.023 LINK 
vhost 00:02:20.282 TEST_HEADER include/spdk/accel_module.h 00:02:20.282 TEST_HEADER include/spdk/bit_pool.h 00:02:20.282 TEST_HEADER include/spdk/ioat.h 00:02:20.282 TEST_HEADER include/spdk/blobfs.h 00:02:20.282 TEST_HEADER include/spdk/notify.h 00:02:20.282 TEST_HEADER include/spdk/pipe.h 00:02:20.282 TEST_HEADER include/spdk/accel.h 00:02:20.282 TEST_HEADER include/spdk/mmio.h 00:02:20.282 TEST_HEADER include/spdk/version.h 00:02:20.282 TEST_HEADER include/spdk/trace_parser.h 00:02:20.282 TEST_HEADER include/spdk/opal_spec.h 00:02:20.282 TEST_HEADER include/spdk/nvmf.h 00:02:20.282 TEST_HEADER include/spdk/bdev.h 00:02:20.282 TEST_HEADER include/spdk/hexlify.h 00:02:20.282 TEST_HEADER include/spdk/likely.h 00:02:20.282 TEST_HEADER include/spdk/keyring_module.h 00:02:20.282 TEST_HEADER include/spdk/memory.h 00:02:20.282 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:20.282 TEST_HEADER include/spdk/dma.h 00:02:20.282 TEST_HEADER include/spdk/nbd.h 00:02:20.282 TEST_HEADER include/spdk/env.h 00:02:20.282 TEST_HEADER include/spdk/nvme_zns.h 00:02:20.282 TEST_HEADER include/spdk/env_dpdk.h 00:02:20.282 TEST_HEADER include/spdk/init.h 00:02:20.282 TEST_HEADER include/spdk/fd_group.h 00:02:20.282 TEST_HEADER include/spdk/bdev_module.h 00:02:20.282 TEST_HEADER include/spdk/opal.h 00:02:20.282 LINK spdk_dd 00:02:20.282 TEST_HEADER include/spdk/event.h 00:02:20.282 TEST_HEADER include/spdk/keyring.h 00:02:20.282 TEST_HEADER include/spdk/base64.h 00:02:20.282 TEST_HEADER include/spdk/nvme_intel.h 00:02:20.282 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:20.282 TEST_HEADER include/spdk/vhost.h 00:02:20.282 TEST_HEADER include/spdk/fd.h 00:02:20.282 TEST_HEADER include/spdk/barrier.h 00:02:20.282 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:20.282 TEST_HEADER include/spdk/zipf.h 00:02:20.282 TEST_HEADER include/spdk/scheduler.h 00:02:20.282 TEST_HEADER include/spdk/dif.h 00:02:20.282 TEST_HEADER include/spdk/scsi_spec.h 00:02:20.282 TEST_HEADER include/spdk/blob.h 00:02:20.282 TEST_HEADER include/spdk/cpuset.h 00:02:20.282 TEST_HEADER include/spdk/thread.h 00:02:20.282 TEST_HEADER include/spdk/tree.h 00:02:20.282 TEST_HEADER include/spdk/xor.h 00:02:20.282 TEST_HEADER include/spdk/assert.h 00:02:20.282 TEST_HEADER include/spdk/file.h 00:02:20.282 TEST_HEADER include/spdk/endian.h 00:02:20.282 TEST_HEADER include/spdk/pci_ids.h 00:02:20.282 TEST_HEADER include/spdk/util.h 00:02:20.282 TEST_HEADER include/spdk/log.h 00:02:20.282 TEST_HEADER include/spdk/sock.h 00:02:20.282 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:20.282 TEST_HEADER include/spdk/config.h 00:02:20.282 TEST_HEADER include/spdk/histogram_data.h 00:02:20.282 TEST_HEADER include/spdk/nvmf_spec.h 00:02:20.282 TEST_HEADER include/spdk/idxd_spec.h 00:02:20.282 TEST_HEADER include/spdk/crc16.h 00:02:20.282 TEST_HEADER include/spdk/bdev_zone.h 00:02:20.282 TEST_HEADER include/spdk/stdinc.h 00:02:20.282 TEST_HEADER include/spdk/scsi.h 00:02:20.282 TEST_HEADER include/spdk/jsonrpc.h 00:02:20.282 TEST_HEADER include/spdk/blob_bdev.h 00:02:20.282 TEST_HEADER include/spdk/crc32.h 00:02:20.282 TEST_HEADER include/spdk/nvmf_transport.h 00:02:20.282 TEST_HEADER include/spdk/vmd.h 00:02:20.282 TEST_HEADER include/spdk/uuid.h 00:02:20.282 TEST_HEADER include/spdk/idxd.h 00:02:20.282 TEST_HEADER include/spdk/crc64.h 00:02:20.282 TEST_HEADER include/spdk/nvme.h 00:02:20.282 TEST_HEADER include/spdk/iscsi_spec.h 00:02:20.282 TEST_HEADER include/spdk/queue.h 00:02:20.282 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:20.282 TEST_HEADER include/spdk/lvol.h 
00:02:20.282 TEST_HEADER include/spdk/ftl.h 00:02:20.282 TEST_HEADER include/spdk/trace.h 00:02:20.282 TEST_HEADER include/spdk/ioat_spec.h 00:02:20.282 TEST_HEADER include/spdk/conf.h 00:02:20.282 TEST_HEADER include/spdk/ublk.h 00:02:20.282 TEST_HEADER include/spdk/bit_array.h 00:02:20.282 TEST_HEADER include/spdk/nvme_spec.h 00:02:20.282 TEST_HEADER include/spdk/string.h 00:02:20.282 TEST_HEADER include/spdk/gpt_spec.h 00:02:20.282 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:20.282 TEST_HEADER include/spdk/json.h 00:02:20.282 TEST_HEADER include/spdk/reduce.h 00:02:20.282 TEST_HEADER include/spdk/rpc.h 00:02:20.282 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:20.282 CXX test/cpp_headers/accel_module.o 00:02:20.282 CXX test/cpp_headers/bit_pool.o 00:02:20.540 CC test/dma/test_dma/test_dma.o 00:02:20.540 CC test/env/mem_callbacks/mem_callbacks.o 00:02:20.540 LINK spdk_nvme 00:02:20.540 CXX test/cpp_headers/ioat.o 00:02:20.540 CXX test/cpp_headers/blobfs.o 00:02:20.540 CC test/event/event_perf/event_perf.o 00:02:20.540 CXX test/cpp_headers/notify.o 00:02:20.540 CXX test/cpp_headers/pipe.o 00:02:20.540 LINK event_perf 00:02:20.799 CXX test/cpp_headers/accel.o 00:02:20.799 CC test/env/vtophys/vtophys.o 00:02:20.799 LINK test_dma 00:02:20.799 CXX test/cpp_headers/mmio.o 00:02:20.799 LINK vtophys 00:02:20.799 LINK mem_callbacks 00:02:20.799 CXX test/cpp_headers/version.o 00:02:20.799 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:21.058 CXX test/cpp_headers/trace_parser.o 00:02:21.058 LINK env_dpdk_post_init 00:02:21.058 CXX test/cpp_headers/opal_spec.o 00:02:21.316 CXX test/cpp_headers/nvmf.o 00:02:21.316 CC test/env/memory/memory_ut.o 00:02:21.316 CC test/event/reactor/reactor.o 00:02:21.316 CC app/fio/bdev/fio_plugin.o 00:02:21.316 CXX test/cpp_headers/bdev.o 00:02:21.574 LINK reactor 00:02:21.574 CC test/env/pci/pci_ut.o 00:02:21.574 CC test/lvol/esnap/esnap.o 00:02:21.574 CXX test/cpp_headers/hexlify.o 00:02:21.832 CXX test/cpp_headers/likely.o 00:02:21.832 LINK pci_ut 00:02:21.832 LINK spdk_bdev 00:02:21.832 CXX test/cpp_headers/keyring_module.o 00:02:21.832 CXX test/cpp_headers/memory.o 00:02:21.832 LINK memory_ut 00:02:21.832 CXX test/cpp_headers/vfio_user_pci.o 00:02:22.090 CC test/event/reactor_perf/reactor_perf.o 00:02:22.090 CXX test/cpp_headers/dma.o 00:02:22.090 CC test/event/app_repeat/app_repeat.o 00:02:22.090 CC test/event/scheduler/scheduler.o 00:02:22.090 LINK reactor_perf 00:02:22.090 CXX test/cpp_headers/nbd.o 00:02:22.090 CXX test/cpp_headers/env.o 00:02:22.348 LINK app_repeat 00:02:22.348 CXX test/cpp_headers/nvme_zns.o 00:02:22.348 CC test/nvme/aer/aer.o 00:02:22.348 CC test/nvme/reset/reset.o 00:02:22.348 CXX test/cpp_headers/env_dpdk.o 00:02:22.348 LINK scheduler 00:02:22.348 CXX test/cpp_headers/init.o 00:02:22.606 CXX test/cpp_headers/fd_group.o 00:02:22.606 LINK aer 00:02:22.606 CXX test/cpp_headers/bdev_module.o 00:02:22.606 LINK reset 00:02:22.606 CXX test/cpp_headers/opal.o 00:02:22.606 CC test/rpc_client/rpc_client_test.o 00:02:22.864 CXX test/cpp_headers/event.o 00:02:22.864 CC test/thread/poller_perf/poller_perf.o 00:02:22.864 LINK rpc_client_test 00:02:22.864 CXX test/cpp_headers/keyring.o 00:02:22.864 CC test/thread/lock/spdk_lock.o 00:02:22.864 LINK poller_perf 00:02:23.123 CXX test/cpp_headers/base64.o 00:02:23.123 CXX test/cpp_headers/nvme_intel.o 00:02:23.381 CXX test/cpp_headers/blobfs_bdev.o 00:02:23.381 CXX test/cpp_headers/vhost.o 00:02:23.382 CXX test/cpp_headers/fd.o 00:02:23.382 CC test/nvme/sgl/sgl.o 00:02:23.640 CC 
test/nvme/e2edp/nvme_dp.o 00:02:23.640 CXX test/cpp_headers/barrier.o 00:02:23.640 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:23.640 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:02:23.640 CXX test/cpp_headers/zipf.o 00:02:23.640 LINK sgl 00:02:23.640 LINK nvme_dp 00:02:23.640 CC test/unit/lib/accel/accel.c/accel_ut.o 00:02:23.898 LINK spdk_lock 00:02:23.898 CXX test/cpp_headers/scheduler.o 00:02:23.898 CXX test/cpp_headers/dif.o 00:02:23.898 LINK histogram_ut 00:02:23.898 CXX test/cpp_headers/scsi_spec.o 00:02:23.898 CXX test/cpp_headers/blob.o 00:02:23.898 CXX test/cpp_headers/cpuset.o 00:02:23.898 CXX test/cpp_headers/thread.o 00:02:23.899 CXX test/cpp_headers/tree.o 00:02:24.157 CXX test/cpp_headers/xor.o 00:02:24.157 CXX test/cpp_headers/assert.o 00:02:24.157 CXX test/cpp_headers/file.o 00:02:24.157 CXX test/cpp_headers/endian.o 00:02:24.157 CXX test/cpp_headers/pci_ids.o 00:02:24.157 CXX test/cpp_headers/util.o 00:02:24.415 CXX test/cpp_headers/log.o 00:02:24.415 CXX test/cpp_headers/sock.o 00:02:24.415 LINK esnap 00:02:24.415 CC test/nvme/overhead/overhead.o 00:02:24.415 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:24.415 CXX test/cpp_headers/config.o 00:02:24.415 CXX test/cpp_headers/histogram_data.o 00:02:24.415 CXX test/cpp_headers/nvmf_spec.o 00:02:24.415 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:02:24.415 CXX test/cpp_headers/idxd_spec.o 00:02:24.673 CC test/nvme/err_injection/err_injection.o 00:02:24.673 CXX test/cpp_headers/crc16.o 00:02:24.673 CXX test/cpp_headers/bdev_zone.o 00:02:24.673 LINK overhead 00:02:24.673 CXX test/cpp_headers/stdinc.o 00:02:24.673 CC test/nvme/startup/startup.o 00:02:24.673 CXX test/cpp_headers/scsi.o 00:02:24.673 LINK err_injection 00:02:24.673 CXX test/cpp_headers/jsonrpc.o 00:02:24.673 CXX test/cpp_headers/blob_bdev.o 00:02:24.673 CXX test/cpp_headers/crc32.o 00:02:24.931 CC test/nvme/reserve/reserve.o 00:02:24.931 LINK startup 00:02:24.931 CXX test/cpp_headers/nvmf_transport.o 00:02:24.931 CXX test/cpp_headers/vmd.o 00:02:24.931 CXX test/cpp_headers/uuid.o 00:02:24.931 CXX test/cpp_headers/idxd.o 00:02:24.931 LINK reserve 00:02:24.931 CXX test/cpp_headers/crc64.o 00:02:25.189 CXX test/cpp_headers/nvme.o 00:02:25.189 CXX test/cpp_headers/iscsi_spec.o 00:02:25.189 CXX test/cpp_headers/queue.o 00:02:25.189 LINK accel_ut 00:02:25.189 CXX test/cpp_headers/nvmf_cmd.o 00:02:25.189 CC test/unit/lib/bdev/part.c/part_ut.o 00:02:25.189 CXX test/cpp_headers/lvol.o 00:02:25.189 CXX test/cpp_headers/ftl.o 00:02:25.447 CXX test/cpp_headers/trace.o 00:02:25.447 CXX test/cpp_headers/ioat_spec.o 00:02:25.447 CXX test/cpp_headers/conf.o 00:02:25.447 CXX test/cpp_headers/ublk.o 00:02:25.447 CC test/nvme/simple_copy/simple_copy.o 00:02:25.447 CC test/nvme/connect_stress/connect_stress.o 00:02:25.705 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:02:25.705 CC test/nvme/boot_partition/boot_partition.o 00:02:25.705 CXX test/cpp_headers/bit_array.o 00:02:25.705 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:02:25.705 LINK simple_copy 00:02:25.705 LINK connect_stress 00:02:25.705 CC test/nvme/compliance/nvme_compliance.o 00:02:25.705 LINK boot_partition 00:02:25.705 CXX test/cpp_headers/nvme_spec.o 00:02:25.963 LINK scsi_nvme_ut 00:02:25.963 CC test/nvme/fused_ordering/fused_ordering.o 00:02:25.963 CXX test/cpp_headers/string.o 00:02:25.963 LINK blob_bdev_ut 00:02:25.963 LINK nvme_compliance 00:02:25.963 CXX test/cpp_headers/gpt_spec.o 00:02:26.221 LINK fused_ordering 00:02:26.221 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:02:26.221 CXX 
test/cpp_headers/nvme_ocssd.o 00:02:26.221 CC test/unit/lib/blob/blob.c/blob_ut.o 00:02:26.480 CXX test/cpp_headers/json.o 00:02:26.480 LINK gpt_ut 00:02:26.480 CXX test/cpp_headers/reduce.o 00:02:26.480 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:26.480 CC test/nvme/fdp/fdp.o 00:02:26.738 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:02:26.738 CC test/nvme/cuse/cuse.o 00:02:26.738 CXX test/cpp_headers/rpc.o 00:02:26.738 LINK doorbell_aers 00:02:26.738 CXX test/cpp_headers/vfio_user_spec.o 00:02:26.996 LINK fdp 00:02:26.996 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:02:26.996 CC test/unit/lib/dma/dma.c/dma_ut.o 00:02:26.996 LINK tree_ut 00:02:26.996 CC test/unit/lib/event/app.c/app_ut.o 00:02:26.996 LINK part_ut 00:02:27.254 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:02:27.254 LINK dma_ut 00:02:27.512 LINK bdev_ut 00:02:27.512 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:02:27.512 LINK cuse 00:02:27.512 LINK vbdev_lvol_ut 00:02:27.512 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:02:27.512 LINK app_ut 00:02:27.512 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:02:27.771 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:02:27.771 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:02:27.771 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:02:27.771 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:02:27.771 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:02:27.771 LINK blobfs_bdev_ut 00:02:27.771 LINK ioat_ut 00:02:28.029 LINK bdev_zone_ut 00:02:28.029 LINK blobfs_async_ut 00:02:28.029 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:02:28.029 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:02:28.293 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:02:28.293 LINK vbdev_zone_block_ut 00:02:28.293 LINK blobfs_sync_ut 00:02:28.293 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:02:28.293 LINK bdev_raid_sb_ut 00:02:28.293 LINK reactor_ut 00:02:28.565 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:02:28.565 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:02:28.565 LINK init_grp_ut 00:02:28.565 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:02:28.823 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:02:28.823 CC test/unit/lib/iscsi/param.c/param_ut.o 00:02:28.823 LINK conn_ut 00:02:28.823 LINK bdev_raid_ut 00:02:29.082 LINK concat_ut 00:02:29.082 LINK jsonrpc_server_ut 00:02:29.082 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:02:29.082 LINK param_ut 00:02:29.082 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:02:29.340 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:02:29.340 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:02:29.340 CC test/unit/lib/log/log.c/log_ut.o 00:02:29.340 LINK bdev_ut 00:02:29.599 LINK portal_grp_ut 00:02:29.599 LINK raid1_ut 00:02:29.599 LINK log_ut 00:02:29.599 LINK json_parse_ut 00:02:29.599 LINK json_util_ut 00:02:29.599 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:02:29.857 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:02:29.857 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:02:29.857 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:02:29.857 CC test/unit/lib/notify/notify.c/notify_ut.o 00:02:29.857 LINK tgt_node_ut 00:02:29.857 LINK iscsi_ut 00:02:29.857 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:02:30.114 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:02:30.114 LINK notify_ut 00:02:30.114 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:02:30.114 LINK blob_ut 00:02:30.372 LINK json_write_ut 00:02:30.372 CC 
test/unit/lib/sock/sock.c/sock_ut.o 00:02:30.630 LINK dev_ut 00:02:30.630 CC test/unit/lib/thread/thread.c/thread_ut.o 00:02:30.630 CC test/unit/lib/util/base64.c/base64_ut.o 00:02:30.630 LINK bdev_nvme_ut 00:02:30.630 LINK nvme_ut 00:02:30.630 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:02:30.630 LINK base64_ut 00:02:30.887 LINK lvol_ut 00:02:30.888 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:02:30.888 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:02:30.888 LINK nvme_ctrlr_cmd_ut 00:02:30.888 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:02:31.145 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:02:31.145 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:02:31.145 LINK lun_ut 00:02:31.145 LINK pci_event_ut 00:02:31.145 LINK bit_array_ut 00:02:31.145 LINK sock_ut 00:02:31.403 LINK subsystem_ut 00:02:31.403 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:02:31.403 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:02:31.403 LINK nvme_ctrlr_ocssd_cmd_ut 00:02:31.403 CC test/unit/lib/sock/posix.c/posix_ut.o 00:02:31.403 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:02:31.661 LINK thread_ut 00:02:31.661 LINK cpuset_ut 00:02:31.661 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:02:31.661 LINK scsi_ut 00:02:31.661 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:02:31.661 LINK nvme_ns_ut 00:02:31.918 LINK nvme_ctrlr_ut 00:02:31.919 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:02:31.919 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:02:31.919 LINK rpc_ut 00:02:31.919 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:02:31.919 LINK tcp_ut 00:02:31.919 LINK rpc_ut 00:02:31.919 LINK crc16_ut 00:02:31.919 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:02:31.919 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:02:32.176 LINK posix_ut 00:02:32.176 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:02:32.176 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:02:32.176 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:02:32.176 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:02:32.176 LINK iobuf_ut 00:02:32.434 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:02:32.434 LINK keyring_ut 00:02:32.434 LINK scsi_bdev_ut 00:02:32.434 LINK idxd_user_ut 00:02:32.434 LINK crc32_ieee_ut 00:02:32.693 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:02:32.693 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:02:32.693 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:02:32.693 CC test/unit/lib/rdma/common.c/common_ut.o 00:02:32.693 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:02:32.951 LINK idxd_ut 00:02:32.951 LINK nvme_ns_cmd_ut 00:02:32.951 LINK nvme_ns_ocssd_cmd_ut 00:02:32.951 LINK crc64_ut 00:02:32.951 LINK crc32c_ut 00:02:33.209 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:02:33.209 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:02:33.209 LINK scsi_pr_ut 00:02:33.209 LINK common_ut 00:02:33.209 LINK nvme_pcie_ut 00:02:33.209 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:02:33.209 CC test/unit/lib/util/dif.c/dif_ut.o 00:02:33.209 CC test/unit/lib/util/iov.c/iov_ut.o 00:02:33.209 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:02:33.468 LINK ftl_l2p_ut 00:02:33.468 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:02:33.468 CC test/unit/lib/util/math.c/math_ut.o 00:02:33.468 LINK iov_ut 00:02:33.468 LINK math_ut 00:02:33.468 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:02:33.726 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:02:33.726 LINK pipe_ut 00:02:33.726 LINK nvme_poll_group_ut 00:02:33.726 CC test/unit/lib/util/string.c/string_ut.o 
00:02:33.726 LINK nvme_quirks_ut 00:02:33.726 LINK vhost_ut 00:02:33.726 CC test/unit/lib/util/xor.c/xor_ut.o 00:02:33.984 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:02:33.984 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:02:33.984 LINK string_ut 00:02:33.984 LINK nvme_qpair_ut 00:02:33.984 LINK ctrlr_ut 00:02:33.984 LINK xor_ut 00:02:33.984 LINK dif_ut 00:02:33.984 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:02:34.242 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:02:34.242 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:02:34.242 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:02:34.242 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:02:34.242 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:02:34.242 LINK ftl_band_ut 00:02:34.500 LINK ftl_bitmap_ut 00:02:34.500 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:02:34.500 LINK nvme_io_msg_ut 00:02:34.500 LINK ftl_io_ut 00:02:34.500 LINK nvme_transport_ut 00:02:34.500 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:02:34.764 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:02:34.764 LINK ftl_mempool_ut 00:02:34.764 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:02:34.764 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:02:35.024 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:02:35.024 LINK nvme_fabric_ut 00:02:35.024 LINK ftl_mngt_ut 00:02:35.024 LINK nvme_pcie_common_ut 00:02:35.024 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:02:35.282 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:02:35.282 LINK nvme_tcp_ut 00:02:35.282 LINK nvme_opal_ut 00:02:35.282 LINK ctrlr_discovery_ut 00:02:35.282 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:02:35.282 LINK subsystem_ut 00:02:35.632 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:02:35.632 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:02:35.632 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:02:35.632 LINK ftl_sb_ut 00:02:35.632 LINK ftl_layout_upgrade_ut 00:02:35.632 LINK ctrlr_bdev_ut 00:02:35.889 LINK nvmf_ut 00:02:36.148 LINK nvme_cuse_ut 00:02:36.148 LINK nvme_rdma_ut 00:02:36.148 LINK auth_ut 00:02:37.083 LINK transport_ut 00:02:37.083 LINK rdma_ut 00:02:37.342 ************************************ 00:02:37.342 END TEST unittest_build 00:02:37.342 ************************************ 00:02:37.342 00:02:37.342 real 1m8.853s 00:02:37.342 user 6m14.652s 00:02:37.342 sys 1m31.532s 00:02:37.342 23:18:00 unittest_build -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:37.342 23:18:00 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:02:37.342 23:18:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:37.342 23:18:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:37.342 23:18:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:37.342 23:18:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.342 23:18:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:37.342 23:18:00 -- pm/common@44 -- $ pid=2673 00:02:37.342 23:18:00 -- pm/common@50 -- $ kill -TERM 2673 00:02:37.342 23:18:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.342 23:18:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:37.342 23:18:00 -- pm/common@44 -- $ pid=2674 00:02:37.342 23:18:00 -- pm/common@50 -- $ kill -TERM 2674 00:02:37.342 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:37.602 
23:18:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:37.602 23:18:00 -- nvmf/common.sh@7 -- # uname -s 00:02:37.602 23:18:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:37.602 23:18:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:37.602 23:18:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:37.602 23:18:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:37.602 23:18:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:37.602 23:18:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:37.602 23:18:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:37.602 23:18:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:37.602 23:18:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:37.602 23:18:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:37.602 23:18:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7654b25-d94a-4033-a98e-8f3ea7c9dcf2 00:02:37.602 23:18:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=f7654b25-d94a-4033-a98e-8f3ea7c9dcf2 00:02:37.602 23:18:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:37.602 23:18:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:37.602 23:18:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:37.602 23:18:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:37.602 23:18:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:37.602 23:18:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:37.602 23:18:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:37.602 23:18:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:37.602 23:18:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:37.602 23:18:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:37.602 23:18:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:37.602 23:18:00 -- paths/export.sh@5 -- # export PATH 00:02:37.602 23:18:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:37.602 23:18:00 -- nvmf/common.sh@47 -- # : 0 00:02:37.602 23:18:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:37.602 23:18:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:37.602 23:18:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:37.602 23:18:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:37.602 23:18:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:37.602 23:18:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:37.602 23:18:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:37.602 23:18:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:37.602 23:18:00 -- 
spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:37.602 23:18:00 -- spdk/autotest.sh@32 -- # uname -s 00:02:37.602 23:18:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:37.602 23:18:00 -- spdk/autotest.sh@33 -- # old_core_pattern=core 00:02:37.602 23:18:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:37.602 23:18:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:37.602 23:18:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:37.602 23:18:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:37.602 modprobe: FATAL: Module nbd not found. 00:02:37.602 23:18:00 -- spdk/autotest.sh@44 -- # true 00:02:37.602 23:18:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:37.602 23:18:00 -- spdk/autotest.sh@46 -- # udevadm=/sbin/udevadm 00:02:37.602 23:18:00 -- spdk/autotest.sh@48 -- # udevadm_pid=36907 00:02:37.602 23:18:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:37.602 23:18:00 -- pm/common@17 -- # local monitor 00:02:37.602 23:18:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.602 23:18:00 -- spdk/autotest.sh@47 -- # /sbin/udevadm monitor --property 00:02:37.602 23:18:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.602 23:18:00 -- pm/common@25 -- # sleep 1 00:02:37.602 23:18:00 -- pm/common@21 -- # date +%s 00:02:37.602 23:18:00 -- pm/common@21 -- # date +%s 00:02:37.602 23:18:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715728680 00:02:37.602 23:18:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715728680 00:02:37.602 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715728680_collect-vmstat.pm.log 00:02:37.602 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715728680_collect-cpu-load.pm.log 00:02:38.538 23:18:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:38.538 23:18:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:38.538 23:18:01 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:38.538 23:18:01 -- common/autotest_common.sh@10 -- # set +x 00:02:38.538 23:18:01 -- spdk/autotest.sh@59 -- # create_test_list 00:02:38.538 23:18:01 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:38.538 23:18:01 -- common/autotest_common.sh@10 -- # set +x 00:02:38.538 23:18:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:38.538 23:18:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:38.538 23:18:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:38.538 23:18:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:38.538 23:18:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:38.538 23:18:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:38.538 23:18:01 -- common/autotest_common.sh@1451 -- # uname 00:02:38.538 23:18:01 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:38.538 23:18:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:38.538 23:18:01 -- common/autotest_common.sh@1471 -- # uname 00:02:38.538 23:18:01 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 
00:02:38.538 23:18:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:38.538 23:18:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:38.538 23:18:01 -- spdk/autotest.sh@72 -- # hash lcov 00:02:38.538 23:18:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:38.538 23:18:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:38.538 --rc lcov_branch_coverage=1 00:02:38.538 --rc lcov_function_coverage=1 00:02:38.538 --rc genhtml_branch_coverage=1 00:02:38.538 --rc genhtml_function_coverage=1 00:02:38.538 --rc genhtml_legend=1 00:02:38.538 --rc geninfo_all_blocks=1 00:02:38.538 ' 00:02:38.538 23:18:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:38.538 --rc lcov_branch_coverage=1 00:02:38.538 --rc lcov_function_coverage=1 00:02:38.538 --rc genhtml_branch_coverage=1 00:02:38.538 --rc genhtml_function_coverage=1 00:02:38.538 --rc genhtml_legend=1 00:02:38.538 --rc geninfo_all_blocks=1 00:02:38.538 ' 00:02:38.538 23:18:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:38.538 --rc lcov_branch_coverage=1 00:02:38.538 --rc lcov_function_coverage=1 00:02:38.538 --rc genhtml_branch_coverage=1 00:02:38.538 --rc genhtml_function_coverage=1 00:02:38.538 --rc genhtml_legend=1 00:02:38.538 --rc geninfo_all_blocks=1 00:02:38.538 --no-external' 00:02:38.538 23:18:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:38.538 --rc lcov_branch_coverage=1 00:02:38.538 --rc lcov_function_coverage=1 00:02:38.538 --rc genhtml_branch_coverage=1 00:02:38.538 --rc genhtml_function_coverage=1 00:02:38.538 --rc genhtml_legend=1 00:02:38.538 --rc geninfo_all_blocks=1 00:02:38.538 --no-external' 00:02:38.538 23:18:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:38.796 lcov: LCOV version 1.15 00:02:38.796 23:18:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:02:46.912 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:46.912 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:46.912 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:46.912 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:46.912 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:46.912 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:08.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:08.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:08.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:08.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:08.978 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:08.978 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:08.978 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:08.978 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:08.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:08.979 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:08.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:08.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:55.734 23:19:11 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:55.734 23:19:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:55.734 23:19:11 -- common/autotest_common.sh@10 -- # set +x 00:03:55.734 23:19:11 -- spdk/autotest.sh@91 -- # rm -f 00:03:55.734 23:19:11 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.735 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.735 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:55.735 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:55.735 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.735 23:19:12 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:55.735 23:19:12 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:55.735 23:19:12 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:55.735 23:19:12 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:55.735 23:19:12 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:55.735 23:19:12 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:55.735 23:19:12 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:55.735 23:19:12 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.735 23:19:12 -- common/autotest_common.sh@1660 -- # return 1 00:03:55.735 23:19:12 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:55.735 23:19:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.735 23:19:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:55.735 23:19:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:55.735 23:19:12 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:55.735 23:19:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:55.735 No valid GPT data, bailing 00:03:55.735 23:19:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:55.735 23:19:12 -- scripts/common.sh@391 -- # pt= 00:03:55.735 23:19:12 -- scripts/common.sh@392 -- # return 1 00:03:55.735 23:19:12 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:55.735 1+0 records in 00:03:55.735 1+0 records out 00:03:55.735 1048576 bytes (1.0 MB) copied, 0.003972 s, 264 MB/s 00:03:55.735 23:19:12 -- spdk/autotest.sh@118 -- # sync 00:03:55.735 23:19:12 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:55.735 23:19:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:55.735 23:19:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:55.735 23:19:13 -- spdk/autotest.sh@124 -- # uname -s 00:03:55.735 23:19:13 -- 
spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:55.735 23:19:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:55.735 23:19:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.735 23:19:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.735 23:19:13 -- common/autotest_common.sh@10 -- # set +x 00:03:55.735 ************************************ 00:03:55.735 START TEST setup.sh 00:03:55.735 ************************************ 00:03:55.735 23:19:13 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:55.735 * Looking for test storage... 00:03:55.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:55.735 23:19:13 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:55.735 23:19:13 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:55.735 23:19:13 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:55.735 23:19:13 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.735 23:19:13 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.735 23:19:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.735 ************************************ 00:03:55.735 START TEST acl 00:03:55.735 ************************************ 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:55.735 * Looking for test storage... 00:03:55.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # return 1 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:55.735 23:19:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.735 23:19:13 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:55.735 23:19:13 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.735 23:19:13 setup.sh.acl -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:55.735 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.735 Hugepages 00:03:55.735 node hugesize free / total 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.735 00:03:55.735 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.735 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:55.735 23:19:13 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.735 23:19:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:55.735 ************************************ 00:03:55.735 START TEST denied 00:03:55.735 ************************************ 00:03:55.735 23:19:13 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:03:55.735 23:19:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:55.735 23:19:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:55.735 23:19:13 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:55.735 23:19:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.735 23:19:13 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.735 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.735 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.735 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:55.735 
23:19:14 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:55.735 23:19:14 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:55.735 23:19:14 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:55.735 23:19:14 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:55.735 23:19:14 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:55.735 23:19:14 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:55.735 23:19:14 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:55.735 23:19:14 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:55.735 23:19:14 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.735 23:19:14 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.735 00:03:55.735 real 0m0.545s 00:03:55.735 user 0m0.301s 00:03:55.735 sys 0m0.284s 00:03:55.735 ************************************ 00:03:55.735 END TEST denied 00:03:55.735 ************************************ 00:03:55.735 23:19:14 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.735 23:19:14 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:55.735 23:19:14 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:55.735 23:19:14 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.735 23:19:14 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.735 23:19:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:55.735 ************************************ 00:03:55.735 START TEST allowed 00:03:55.735 ************************************ 00:03:55.735 23:19:14 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:03:55.735 23:19:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:55.735 23:19:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:55.735 23:19:14 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:55.735 23:19:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.735 23:19:14 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.735 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.735 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.736 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.736 23:19:14 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:55.736 23:19:14 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:55.736 23:19:14 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:55.736 23:19:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.736 23:19:14 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.736 ************************************ 00:03:55.736 END TEST allowed 00:03:55.736 ************************************ 00:03:55.736 00:03:55.736 real 0m0.690s 00:03:55.736 user 0m0.249s 00:03:55.736 sys 0m0.414s 00:03:55.736 23:19:15 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.736 23:19:15 
setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:55.736 00:03:55.736 real 0m1.952s 00:03:55.736 user 0m0.930s 00:03:55.736 sys 0m1.067s 00:03:55.736 23:19:15 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.736 23:19:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:55.736 ************************************ 00:03:55.736 END TEST acl 00:03:55.736 ************************************ 00:03:55.736 23:19:15 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:55.736 23:19:15 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.736 23:19:15 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.736 23:19:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.736 ************************************ 00:03:55.736 START TEST hugepages 00:03:55.736 ************************************ 00:03:55.736 23:19:15 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:55.736 * Looking for test storage... 00:03:55.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 4772052 kB' 'MemAvailable: 7434320 kB' 'Buffers: 2068 kB' 'Cached: 2853384 kB' 'SwapCached: 0 kB' 'Active: 2212644 kB' 'Inactive: 729780 kB' 'Active(anon): 87180 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125464 kB' 'Inactive(file): 713096 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 86656 kB' 'Mapped: 25628 kB' 'Shmem: 16892 kB' 'Slab: 171296 kB' 'SReclaimable: 122652 kB' 'SUnreclaim: 48644 kB' 'KernelStack: 3648 kB' 'PageTables: 8308 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4053424 kB' 'Committed_AS: 343400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38768 kB' 
'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.736 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 
00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:55.737 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:55.737 23:19:15 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.737 23:19:15 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.737 23:19:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.737 ************************************ 00:03:55.737 START TEST default_setup 00:03:55.737 ************************************ 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:55.737 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.738 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:55.738 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.738 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No 
such file or directory 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6866960 kB' 'MemAvailable: 9529228 kB' 'Buffers: 2068 kB' 'Cached: 2853384 kB' 'SwapCached: 0 kB' 'Active: 2220784 kB' 'Inactive: 729780 kB' 'Active(anon): 95320 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125464 kB' 'Inactive(file): 713096 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 94620 kB' 'Mapped: 25628 kB' 'Shmem: 16892 kB' 'Slab: 171296 kB' 'SReclaimable: 122652 kB' 'SUnreclaim: 48644 kB' 'KernelStack: 3648 kB' 'PageTables: 8404 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.738 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 
23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 4096 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=4096 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867612 kB' 'MemAvailable: 9529880 kB' 'Buffers: 2068 kB' 'Cached: 2853384 kB' 'SwapCached: 0 kB' 'Active: 2220784 kB' 'Inactive: 729780 kB' 'Active(anon): 95320 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125464 kB' 'Inactive(file): 713096 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 94620 kB' 'Mapped: 25628 kB' 'Shmem: 16892 kB' 'Slab: 171296 kB' 'SReclaimable: 122652 kB' 'SUnreclaim: 48644 kB' 'KernelStack: 3648 kB' 'PageTables: 8404 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.739 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.740 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 
23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867808 kB' 'MemAvailable: 9530076 kB' 'Buffers: 2068 kB' 'Cached: 2853384 kB' 'SwapCached: 0 kB' 'Active: 2220588 kB' 'Inactive: 729780 kB' 'Active(anon): 95124 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125464 kB' 'Inactive(file): 713096 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 94328 kB' 'Mapped: 25628 kB' 'Shmem: 16892 kB' 'Slab: 171296 kB' 'SReclaimable: 122652 kB' 'SUnreclaim: 48644 kB' 'KernelStack: 3648 kB' 'PageTables: 8404 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.741 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:55.742 nr_hugepages=1024 00:03:55.742 resv_hugepages=0 00:03:55.742 surplus_hugepages=0 00:03:55.742 anon_hugepages=4096 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=4096 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.742 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
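The records just above show the hugepages helper echoing nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=4096, then re-reading HugePages_Total to confirm the pool adds up. A minimal stand-alone sketch of that consistency check follows; the get_field helper and the hard-coded expected value are illustrative, not taken from the repository:

#!/usr/bin/env bash
# Sketch: verify that the kernel's hugepage pool matches what the test expects,
# mirroring the "total == nr_hugepages + surp + resv" arithmetic traced in the log.
set -euo pipefail

expected=1024   # pages requested by the test (2048 kB pages on this host)

get_field() {   # read one numeric value from /proc/meminfo
  awk -v k="$1" -F': +' '$1 == k { print $2+0 }' /proc/meminfo
}

total=$(get_field HugePages_Total)
rsvd=$(get_field HugePages_Rsvd)
surp=$(get_field HugePages_Surp)

if (( total == expected + surp + rsvd )); then
  echo "hugepage pool consistent: total=$total rsvd=$rsvd surp=$surp"
else
  echo "mismatch: total=$total expected=$expected rsvd=$rsvd surp=$surp" >&2
  exit 1
fi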
00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867768 kB' 'MemAvailable: 9530036 kB' 'Buffers: 2068 kB' 'Cached: 2853384 kB' 'SwapCached: 0 kB' 'Active: 2220588 kB' 'Inactive: 729780 kB' 'Active(anon): 95124 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125464 kB' 'Inactive(file): 713096 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 94716 kB' 'Mapped: 25628 kB' 'Shmem: 16892 kB' 'Slab: 171296 kB' 'SReclaimable: 122652 kB' 'SUnreclaim: 48644 kB' 'KernelStack: 3648 kB' 'PageTables: 8016 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 
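The printf record above is the full meminfo snapshot that the traced get_meminfo call loads into an array; the long run of continue records that follows is simply the scan skipping every key until HugePages_Total matches. An equivalent stand-alone reader might look like the sketch below (the function name is made up; only the system-wide /proc/meminfo case is handled):

#!/usr/bin/env bash
# Sketch of the field lookup the trace performs: scan meminfo with IFS=': '
# and echo the value of the first matching key.
get_meminfo_field() {
  local want=$1 var val _
  while IFS=': ' read -r var val _; do
    # Skip every key that is not the requested one, exactly like the
    # long run of "continue" records in the log.
    [[ $var == "$want" ]] || continue
    echo "$val"
    return 0
  done < /proc/meminfo
  return 1
}

get_meminfo_field HugePages_Total   # prints e.g. 1024
get_meminfo_field MemFree           # prints the free-memory figure in kB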
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 
23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.743 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv 
)) 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867708 kB' 'MemUsed: 5433444 kB' 'Active: 2220588 kB' 'Inactive: 729780 kB' 'Active(anon): 95124 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125464 kB' 'Inactive(file): 713096 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'FilePages: 2855452 kB' 'Mapped: 25628 kB' 'AnonPages: 94328 kB' 'Shmem: 16892 kB' 'KernelStack: 3648 kB' 'PageTables: 8016 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171296 kB' 'SReclaimable: 122652 kB' 'SUnreclaim: 48644 kB' 'AnonHugePages: 4096 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 
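Around this point the trace enumerates /sys/devices/system/node/node* and switches get_meminfo over to the per-node meminfo file, whose lines carry a leading "Node N " prefix that gets stripped before matching. A small sketch of the same per-node walk, assuming 2048 kB hugepages and the standard sysfs layout:

#!/usr/bin/env bash
# Sketch: report how many 2 MiB hugepages each NUMA node currently holds,
# using the same sysfs locations the traced helpers read.
shopt -s extglob nullglob

for node in /sys/devices/system/node/node+([0-9]); do
  id=${node##*node}
  # Per-node pool size as exposed by sysfs (the same kind of figure the
  # trace records per node, 1024 for node0 here).
  nr=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  # Per-node meminfo lines look like "Node 0 HugePages_Surp:  0"; strip the
  # "Node N " prefix before matching, as the traced parser does.
  surp=$(sed -n "s/^Node $id HugePages_Surp:[[:space:]]*//p" "$node/meminfo")
  echo "node$id: nr_hugepages=$nr surplus=${surp:-0}"
done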
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.744 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:55.745 node0=1024 expecting 1024 00:03:55.745 ************************************ 00:03:55.745 END TEST default_setup 00:03:55.745 ************************************ 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.745 00:03:55.745 real 0m0.452s 00:03:55.745 user 0m0.197s 00:03:55.745 sys 0m0.239s 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.745 23:19:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:55.745 23:19:15 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:55.745 23:19:15 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.746 23:19:15 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.746 23:19:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.746 ************************************ 00:03:55.746 START TEST per_node_1G_alloc 00:03:55.746 ************************************ 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.746 23:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.746 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:55.746 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.746 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.746 23:19:16 
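The get_test_nr_hugepages records here turn the requested 1048576 kB (1 GiB) into 512 hugepages of 2048 kB each, assign them to node 0, and then re-run scripts/setup.sh with NRHUGE and HUGENODE set. A worked sketch of that arithmetic (variable names are illustrative; the setup.sh path is the one seen in the log and needs root to run):

#!/usr/bin/env bash
# Sketch: turn a requested allocation size into a 2 MiB hugepage count for
# one NUMA node, then drive the SPDK setup script with it.
set -euo pipefail

size_kb=1048576                                                  # 1 GiB, as in the traced test
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
nr_hugepages=$(( size_kb / hugepage_kb ))                        # 1048576 / 2048 = 512
node=0

echo "requesting $nr_hugepages hugepages of ${hugepage_kb} kB on node $node"
# The trace exports the same pair of knobs before re-running setup.sh.
HUGENODE=$node NRHUGE=$nr_hugepages /home/vagrant/spdk_repo/spdk/scripts/setup.sh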
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7916180 kB' 'MemAvailable: 10578636 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220192 kB' 'Inactive: 729840 kB' 'Active(anon): 94680 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94628 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 7780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.746 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 
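The scan running through these records is the AnonHugePages lookup that the verification only performs when transparent hugepages are not pinned to never (the "[always] madvise never" test slightly earlier in the trace). A small sketch of that guard, assuming the usual sysfs location for the THP mode:

#!/usr/bin/env bash
# Sketch: record the transparent-hugepage baseline the same way the traced
# verification does, so THP-backed anonymous memory is not mistaken for
# explicitly reserved hugepages.
anon_kb=0
thp_mode_file=/sys/kernel/mm/transparent_hugepage/enabled   # assumed standard path

if [[ -r $thp_mode_file && $(< "$thp_mode_file") != *"[never]"* ]]; then
  # THP may be in use, so note how much anonymous memory is already backed
  # by huge pages (4096 kB at this point in the log).
  anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=${anon_kb} kB"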
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 4096 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=4096 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- 
# get_meminfo HugePages_Surp 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7915816 kB' 'MemAvailable: 10578272 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220452 kB' 'Inactive: 729840 kB' 'Active(anon): 94940 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94628 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 7780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.747 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.748 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.749 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7915816 kB' 'MemAvailable: 10578272 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220452 kB' 'Inactive: 729840 kB' 'Active(anon): 94940 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94532 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 7780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
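Editor's note: the long runs of "[[ Field == \H\u\g\e... ]] / continue" above are the shell trace of get_meminfo() in setup/common.sh scanning /proc/meminfo one field at a time until it reaches the requested key (first AnonHugePages, then HugePages_Surp, and here HugePages_Rsvd). A minimal standalone sketch of that lookup pattern follows; the function name meminfo_lookup is hypothetical, and unlike the real helper (which mapfiles the whole file into an array and strips the "Node N " prefixes up front) it simply streams the file.

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern visible in the trace (not SPDK's actual helper).
meminfo_lookup() {
    local get=$1 node=${2:-}            # field name, optional NUMA node id
    local mem_f=/proc/meminfo line var val _
    # Per-node queries read the sysfs copy of meminfo instead of the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}      # per-node lines carry a "Node <id> " prefix
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then   # every non-matching field is skipped ("continue")
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

On this runner such a lookup would print 4096 (kB) for AnonHugePages and 0 for HugePages_Surp, matching the anon=4096 and surp=0 assignments recorded in the trace above.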
00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 
23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.751 nr_hugepages=512 00:03:55.751 resv_hugepages=0 00:03:55.751 surplus_hugepages=0 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.751 anon_hugepages=4096 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=4096 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.751 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7915528 kB' 'MemAvailable: 10577984 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220648 kB' 'Inactive: 729840 kB' 'Active(anon): 95136 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94532 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 7780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
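Editor's note: once the three lookups complete, hugepages.sh cross-checks the pool. The echoed nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=4096 feed the (( 512 == nr_hugepages + surp + resv )) test seen above, and the HugePages_Total scan that the trace is now performing re-reads the pool size for the same comparison. A hedged sketch of that accounting check (helper name and structure here are illustrative, not SPDK's):

#!/usr/bin/env bash
# Sketch of the global hugepage accounting check performed by the trace above.
check_hugepage_accounting() {
    local expected=$1 total rsvd surp anon
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    rsvd=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    anon=$(awk  '$1 == "AnonHugePages:"   {print $2}' /proc/meminfo)
    echo "nr_hugepages=$total resv_hugepages=$rsvd surplus_hugepages=$surp anon_hugepages=$anon"
    # Mirrors the (( 512 == nr_hugepages + surp + resv )) test in the trace.
    (( expected == total + surp + rsvd ))
}

With the values logged above this evaluates 512 == 512 + 0 + 0, so the check passes and the test can move on to the per-node allocation.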
00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
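Editor's note: after this global HugePages_Total read finishes, the remainder of the trace (get_nodes and the get_meminfo HugePages_Surp 0 call below) repeats the same lookup per NUMA node, switching mem_f to /sys/devices/system/node/node0/meminfo. A minimal sketch of that per-node pass, assuming the standard sysfs layout; loop and variable names are illustrative only:

#!/usr/bin/env bash
# Sketch of the per-node pass that the rest of this trace performs.
shopt -s nullglob
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Surp:     0".
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
    echo "node$node HugePages_Surp=$surp"
done

The node0 snapshot printed further down ('HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0') is what such a per-node read returns on this runner.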
00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7915492 kB' 'MemUsed: 4385660 kB' 'Active: 2220648 kB' 'Inactive: 729840 kB' 'Active(anon): 95136 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2855564 kB' 'Mapped: 25228 kB' 'AnonPages: 94240 kB' 'Shmem: 16892 kB' 'KernelStack: 3616 kB' 'PageTables: 7780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'AnonHugePages: 4096 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
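The entries above and below trace setup/common.sh's get_meminfo helper resolving a per-node query: because node 0 was passed, mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, the leading "Node <n> " prefix is stripped from each line, and the key/value pairs are then walked one by one until the requested field (HugePages_Surp here) matches and its value is echoed. A minimal standalone sketch of that lookup, reconstructed from the trace rather than lifted from the SPDK helper itself, could look like this:

    # get_meminfo_sketch <field> [node] - hypothetical re-creation of the lookup
    # traced here; the real logic is the setup/common.sh get_meminfo shown above.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo, whose lines carry a
        # "Node <n> " prefix that /proc/meminfo lines do not have.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            # Print the value of the first matching "Key: value" pair and stop.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* *//' "$mem_f")
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Surp 0  ->  0, matching the echo 0 below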
00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.754 node0=512 expecting 512 00:03:55.754 ************************************ 00:03:55.754 END TEST per_node_1G_alloc 00:03:55.754 ************************************ 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:55.754 00:03:55.754 real 0m0.259s 00:03:55.754 user 0m0.140s 00:03:55.754 sys 0m0.143s 
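That closes per_node_1G_alloc: the global HugePages_Total of 512 satisfied the (( 512 == nr_hugepages + surp + resv )) gate, node0's own meminfo reported 512 total and 0 surplus pages, and the per-node tally therefore matched the expected node0=512. The bookkeeping hugepages.sh is doing amounts to the short check below (a simplified sketch, with nodes_test holding the pages observed per node and nodes_sys the pages requested; the surp/resv values are the ones read from node0's meminfo above):

    # Simplified sketch of the per-node verification traced above.
    declare -a nodes_test=([0]=512)   # pages observed per NUMA node
    declare -a nodes_sys=([0]=512)    # pages requested per NUMA node
    surp=0                            # HugePages_Surp from node0 meminfo
    resv=0                            # reserved pages folded in the same way
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += surp + resv ))
        echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} -eq ${nodes_sys[node]} ]] || exit 1
    done

With a single node and no surplus or reserved pages the comparison reduces to 512 == 512, which is the [[ 512 == \5\1\2 ]] test the xtrace shows just before END TEST.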
00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.754 23:19:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.754 23:19:16 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:55.754 23:19:16 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.754 23:19:16 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.754 23:19:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.754 ************************************ 00:03:55.754 START TEST even_2G_alloc 00:03:55.754 ************************************ 00:03:55.754 23:19:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:03:55.754 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:55.754 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.754 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.754 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.754 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.754 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.754 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:55.754 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.754 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.755 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.755 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda1, so not binding PCI dev 00:03:55.755 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.755 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865052 kB' 'MemAvailable: 9527508 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2221624 kB' 'Inactive: 729840 kB' 'Active(anon): 96112 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94044 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 8072 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
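The field-by-field scan of /proc/meminfo continues below; the even_2G_alloc run it belongs to asked get_test_nr_hugepages for 2097152 (which lines up with kilobytes here, i.e. 2 GiB), and with the 2048 kB Hugepagesize reported in the snapshot above that works out to nr_hugepages=1024, handed to scripts/setup.sh as NRHUGE=1024 with HUGE_EVEN_ALLOC=yes so the pages are spread evenly across the (single) node. The sizing step is plain integer division, roughly:

    # Rough sketch of the sizing arithmetic behind nr_hugepages=1024 above
    # (variable names hypothetical, values taken from this run).
    size_kb=2097152                                                        # 2 GiB requested by even_2G_alloc
    default_hugepages_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this VM
    (( nr_hugepages = size_kb / default_hugepages_kb ))                    # -> 1024
    echo "NRHUGE=$nr_hugepages"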
00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 4096 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=4096 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865572 kB' 'MemAvailable: 9528028 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2221624 kB' 'Inactive: 729840 kB' 'Active(anon): 96112 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94044 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 8072 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
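Before the HugePages_Rsvd lookup completes below, the numbers verify_nr_hugepages has collected so far are anon=4096 (AnonHugePages), surp=0 (HugePages_Surp), and a snapshot showing HugePages_Total: 1024 and HugePages_Free: 1024. They feed the same consistency gate seen at hugepages.sh@110 earlier in the trace, which for these values reduces to:

    # Values from the meminfo snapshots above; resv is the HugePages_Rsvd value
    # being read next (the snapshot reports 0).
    nr_hugepages=1024
    total=1024   # HugePages_Total
    surp=0       # HugePages_Surp
    resv=0       # HugePages_Rsvd
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage total" >&2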
00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.757 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865832 kB' 'MemAvailable: 9528288 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2221624 kB' 'Inactive: 729840 kB' 'Active(anon): 96112 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 93656 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 8072 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.759 nr_hugepages=1024 00:03:55.759 resv_hugepages=0 00:03:55.759 surplus_hugepages=0 00:03:55.759 anon_hugepages=4096 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=4096 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865564 kB' 'MemAvailable: 9528020 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2221624 kB' 'Inactive: 729840 kB' 'Active(anon): 96112 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 93656 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 8072 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.759 
23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.759 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.760 23:19:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865556 kB' 'MemUsed: 5435596 kB' 'Active: 2221364 kB' 'Inactive: 729840 kB' 'Active(anon): 95852 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2855564 kB' 'Mapped: 25228 kB' 'AnonPages: 94044 kB' 'Shmem: 16892 kB' 'KernelStack: 3616 kB' 'PageTables: 8072 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'AnonHugePages: 4096 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.760 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
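By this point the even_2G_alloc pass has already confirmed the global accounting: HugePages_Total (1024) equals the requested nr_hugepages (1024) plus surplus (0) plus reserved (0), with anon_hugepages at 4096 kB. The scan running here repeats the same field walk against /sys/devices/system/node/node0/meminfo so node 0 can be credited with its reserved and surplus pages before the final 'node0=1024 expecting 1024' comparison. A condensed per-node sketch of that bookkeeping, pieced together from the traced hugepages.sh statements (it assumes the get_meminfo helper sketched earlier, and fills in details the trace does not show):

    # Simplified reconstruction of the traced hugepages.sh per-node checks (sh@27-33, @115-130).
    shopt -s extglob
    declare -a nodes_sys nodes_test
    resv=0                                       # HugePages_Rsvd from the global pass above
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        nodes_sys[n]=1024                        # get_nodes records each node's hugepage count; 1024 in this run
        nodes_test[n]=1024                       # what the test asked this node to hold
    done
    for n in "${!nodes_test[@]}"; do
        (( nodes_test[n] += resv ))                                   # fold reserved pages back in
        (( nodes_test[n] += $(get_meminfo HugePages_Surp "$n") ))     # plus per-node surplus (0 here)
        echo "node$n=${nodes_sys[n]} expecting ${nodes_test[n]}"      # -> node0=1024 expecting 1024
    done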
00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.761 node0=1024 expecting 1024 00:03:55.761 ************************************ 00:03:55.761 END TEST even_2G_alloc 00:03:55.761 ************************************ 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.761 00:03:55.761 real 0m0.266s 00:03:55.761 user 0m0.146s 00:03:55.761 sys 0m0.145s 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.761 23:19:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.761 23:19:16 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:55.761 23:19:16 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.761 23:19:16 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.761 23:19:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.761 ************************************ 00:03:55.761 START TEST odd_alloc 00:03:55.761 ************************************ 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:55.761 
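The odd_alloc test starting here sizes its pool so the hugepage count comes out odd: get_test_nr_hugepages is handed size=2098176 kB and settles on nr_hugepages=1025, and the HUGEMEM=2049 / HUGE_EVEN_ALLOC=yes exports follow just below before setup.sh is re-run. The numbers line up as follows (values taken from the trace; the exact rounding rule inside hugepages.sh is assumed rather than verified):

    # Worked sizing for odd_alloc, using only values visible in the trace.
    hugemem_mb=2049                           # HUGEMEM exported by the test (see just below)
    hugepagesize_kb=2048                      # "Hugepagesize: 2048 kB" in every meminfo dump above
    size_kb=$(( hugemem_mb * 1024 ))          # = 2098176, matching "local size=2098176"
    echo $(( size_kb / hugepagesize_kb ))     # prints 1024; 2098176/2048 is 1024.5, and the script
                                              # settles on nr_hugepages=1025, an intentionally odd
                                              # count (round-up assumed, not verified)

HUGE_EVEN_ALLOC=yes presumably asks setup.sh to spread those 1025 pages evenly across the NUMA nodes, which is what the verify_nr_hugepages pass that follows is checking.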
23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.761 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.761 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:55.761 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.761 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.762 23:19:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.762 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865208 kB' 'MemAvailable: 9527664 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220648 kB' 'Inactive: 729840 kB' 'Active(anon): 95136 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94532 kB' 'Mapped: 25812 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 8168 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100976 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB'
00:03:55.763 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.763 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 4096 00:03:55.763 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.763 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=4096
00:03:55.763 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.763 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.763 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.763 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865468 kB' 'MemAvailable: 9527924 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220648 kB' 'Inactive: 729840 kB' 'Active(anon): 95136 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94532 kB' 'Mapped: 25812 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 8168 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100976 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB'
00:03:55.764 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.764 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.764 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.764 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
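The trace above shows the lookup pattern used for each meminfo query: snapshot /proc/meminfo (or the per-node meminfo file when a node is given) into an array, strip any leading "Node N " prefix, split each line on ': ', and echo the value of the first key that matches the requested name (here AnonHugePages, then HugePages_Surp). The following is a minimal sketch of that pattern for reference only; it is not the SPDK setup/common.sh implementation, and the name get_meminfo_sketch is illustrative.

#!/usr/bin/env bash
# Sketch of the per-key /proc/meminfo lookup traced above (illustrative, not SPDK's helper).
shopt -s extglob

get_meminfo_sketch() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	local -a mem
	local line var val _

	# Prefer the per-node file when a node was requested and the file exists,
	# mirroring the [[ -e /sys/devices/system/node/node$node/meminfo ]] check in the trace.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node meminfo lines carry a "Node N " prefix; strip it as the trace does.
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # e.g. HugePages_Surp, AnonHugePages
		echo "$val"
		return 0
	done
	return 1
}

# Example: on the VM traced here, `get_meminfo_sketch HugePages_Total` would print 1025
# and `get_meminfo_sketch AnonHugePages` would print 4096.
get_meminfo_sketch "${1:-HugePages_Total}"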
00:03:55.764 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.764 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.764 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.764 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865076 kB' 'MemAvailable: 9527532 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220388 kB' 'Inactive: 729840 kB' 'Active(anon): 94876 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94532 kB' 'Mapped: 25812 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 8168 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100976 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB'
00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.765 nr_hugepages=1025 resv_hugepages=0 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.765 surplus_hugepages=0 anon_hugepages=4096 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=4096
00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
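With the values just echoed by this run (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=4096 kB), the two arithmetic guards at setup/hugepages.sh@107 and @109 reduce to 1025 == 1025 + 0 + 0 and 1025 == 1025, so the odd 1025-page request set up via HUGEMEM=2049 is fully accounted for before HugePages_Total is read back from /proc/meminfo. A small re-statement of that accounting check, using the values from this run and illustrative variable names:

#!/usr/bin/env bash
# Hedged sketch of the hugepage accounting check seen above; the numbers are the ones
# printed by this run, and the variable names are illustrative only.
requested=1025      # pages the odd_alloc test asked for in this run
nr_hugepages=1025   # echoed as nr_hugepages=1025
surp=0              # HugePages_Surp from /proc/meminfo
resv=0              # HugePages_Rsvd from /proc/meminfo

if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
	echo "hugepage accounting consistent: $requested pages"
else
	echo "hugepage accounting mismatch" >&2
	exit 1
fi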
00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865032 kB' 'MemAvailable: 9527488 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220648 kB' 'Inactive: 729840 kB' 'Active(anon): 95136 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94532 kB' 'Mapped: 25812 kB' 'Shmem: 16892 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'KernelStack: 3616 kB' 'PageTables: 8168 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100976 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB'
00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.765 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 
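[editor's note] The trace above shows the odd_alloc case scanning a meminfo-style dump until it reaches HugePages_Total (1025), checking it at hugepages.sh@110 against nr_hugepages + surp + resv, and then walking /sys/devices/system/node/node* to collect per-node counts. The following is a minimal sketch of that style of lookup and check, assuming a hypothetical helper name (get_meminfo_key) and an awk/sed lookup in place of the mapfile/read loop the real setup/common.sh uses; it is illustrative, not the script's source.

    # Sketch only: read one key from a meminfo-style file, then repeat the
    # global and per-node checks the trace performs.
    get_meminfo_key() {
        local key=$1 file=${2:-/proc/meminfo}
        # Per-node files prefix each line with "Node <n> "; /proc/meminfo does not.
        # Strip that prefix (the trace does this with mem=("${mem[@]#Node +([0-9]) }"))
        # and print the numeric value for the requested key.
        sed 's/^Node [0-9]* //' "$file" | awk -v k="$key:" '$1 == k { print $2 }'
    }

    nr_hugepages=1025   # the odd-numbered request under test in this log
    total=$(get_meminfo_key HugePages_Total)
    surp=$(get_meminfo_key HugePages_Surp)
    resv=$(get_meminfo_key HugePages_Rsvd)

    # Same consistency check as hugepages.sh@110 in the trace above:
    (( total == nr_hugepages + surp + resv )) && echo "global hugepage count OK"

    # Per-node counts come from the sysfs copy of meminfo, one file per node:
    for node in /sys/devices/system/node/node[0-9]*; do
        echo "${node##*/} HugePages_Surp: $(get_meminfo_key HugePages_Surp "$node/meminfo")"
    done
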
00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.766 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6865292 kB' 'MemUsed: 5435860 kB' 'Active: 2220648 kB' 'Inactive: 729840 kB' 'Active(anon): 95136 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713156 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2855564 kB' 'Mapped: 25812 kB' 'AnonPages: 94920 kB' 'Shmem: 16892 kB' 'KernelStack: 3616 kB' 'PageTables: 7780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171504 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48688 kB' 'AnonHugePages: 4096 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 
23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.767 node0=1025 expecting 1025 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:55.767 00:03:55.767 real 0m0.283s 00:03:55.767 user 0m0.149s 00:03:55.767 sys 0m0.161s 00:03:55.767 23:19:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.767 23:19:16 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.767 ************************************ 00:03:55.767 END TEST odd_alloc 00:03:55.767 ************************************ 00:03:55.767 23:19:16 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:55.767 23:19:16 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.767 23:19:16 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.767 23:19:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.767 ************************************ 00:03:55.767 START TEST custom_alloc 00:03:55.767 ************************************ 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:55.767 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.768 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:55.768 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.768 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7915496 kB' 'MemAvailable: 10577956 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220672 kB' 'Inactive: 729844 kB' 'Active(anon): 95160 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94324 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 8356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 
23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.768 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 4096 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=4096 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
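[editor's note] The custom_alloc trace converts the requested 1048576 kB into 512 hugepages (1048576 / 2048, per the Hugepagesize shown in the dump) and passes that through HUGENODE='nodes_hp[0]=512'; verify_nr_hugepages then samples single keys such as AnonHugePages (4096 kB here) before re-reading HugePages_Surp. A brief sketch of that arithmetic and lookup follows; the variable names are illustrative and the awk one-liners stand in for the script's own read loop.

    # Sketch only: size-to-pages arithmetic and single-key meminfo lookups
    # corresponding to the custom_alloc steps traced above.
    requested_kb=1048576                                 # get_test_nr_hugepages 1048576
    hugepagesize_kb=$(awk '/^Hugepagesize:/ { print $2 }' /proc/meminfo)
    nr_hugepages=$(( requested_kb / hugepagesize_kb ))   # 1048576 / 2048 = 512
    echo "nr_hugepages=$nr_hugepages"                    # matches nodes_hp[0]=512 in the trace

    # verify_nr_hugepages samples individual keys the same way, e.g. AnonHugePages:
    anon=$(awk '/^AnonHugePages:/ { print $2 }' /proc/meminfo)   # 4096 kB in this run
    echo "anon=$anon"
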
00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7915692 kB' 'MemAvailable: 10578152 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220932 kB' 'Inactive: 729844 kB' 'Active(anon): 95420 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94712 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 8356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 
23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.769 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7915952 kB' 'MemAvailable: 10578412 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220932 kB' 'Inactive: 729844 kB' 'Active(anon): 95420 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94324 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 
8356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.770 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 
23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:55.771 nr_hugepages=512 00:03:55.771 resv_hugepages=0 00:03:55.771 surplus_hugepages=0 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 
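The two lookups that just returned above walk /proc/meminfo record by record, splitting each 'Key: value' pair on ': ' and echoing the value once the requested key matches, which is how HugePages_Surp and HugePages_Rsvd both come back as 0 here. A minimal stand-alone sketch of that pattern (not the repository's own setup/common.sh helper, whose node handling and prefix stripping are richer) might look like this:

    # Sketch: echo the value column for a single /proc/meminfo key, mirroring
    # the key-by-key loop traced above.
    get_meminfo_sketch() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1    # key not found
    }
    # e.g. get_meminfo_sketch HugePages_Rsvd   -> 0 on this run

With surp=0 and resv=0 established, the test only needs HugePages_Total to equal the 512 pages it requested, which the next lookup confirms.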
00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.771 anon_hugepages=4096 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=4096 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7915896 kB' 'MemAvailable: 10578356 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220672 kB' 'Inactive: 729844 kB' 'Active(anon): 95160 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94616 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 8356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626288 kB' 'Committed_AS: 353372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.771 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.772 23:19:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 7916744 kB' 'MemUsed: 4384408 kB' 'Active: 2220412 kB' 'Inactive: 729844 kB' 'Active(anon): 94900 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2855564 kB' 'Mapped: 25248 kB' 'AnonPages: 94032 kB' 'Shmem: 16892 kB' 'KernelStack: 3696 kB' 'PageTables: 8356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'AnonHugePages: 4096 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.772 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:55.773 node0=512 expecting 512 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:55.773 00:03:55.773 real 0m0.251s 00:03:55.773 user 0m0.145s 00:03:55.773 sys 0m0.133s 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.773 23:19:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.773 ************************************ 00:03:55.773 END TEST custom_alloc 00:03:55.773 ************************************ 00:03:55.773 23:19:17 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:55.773 23:19:17 setup.sh.hugepages -- common/autotest_common.sh@1097 
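The custom_alloc test above finishes by walking the node's meminfo one field at a time until it reaches HugePages_Surp, then confirms "node0=512 expecting 512". The walk is driven by setup/common.sh's get_meminfo: mapfile the meminfo file, strip any "Node N " prefix, split each entry on ': ', and echo the value once the requested field matches. A minimal, self-contained sketch of that lookup (illustrative names, not the repository's exact code):

    #!/usr/bin/env bash
    # Sketch of the per-field scan visible in the trace above.
    shopt -s extglob  # enables the +([0-9]) pattern used to strip "Node N " prefixes

    meminfo_value() {
      local want=$1 node=${2-} file=/proc/meminfo
      local line var val _
      # Per-node reads come from sysfs when a node id is given.
      [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
      while read -r line; do
        line=${line#Node +([0-9]) }            # per-node files prefix every field
        IFS=': ' read -r var val _ <<<"$line"  # e.g. var=HugePages_Surp, val=0
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
      done <"$file"
      return 1
    }

    meminfo_value HugePages_Surp   # prints 0 on the node dumped above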
-- # '[' 2 -le 1 ']' 00:03:55.773 23:19:17 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.773 23:19:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.773 ************************************ 00:03:55.773 START TEST no_shrink_alloc 00:03:55.773 ************************************ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.773 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:55.773 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.773 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- 
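Before the meminfo reads that follow, get_test_nr_hugepages converts the requested 2097152 kB into 1024 pages at the default 2048 kB size and assigns them to node 0, after which scripts/setup.sh is re-run (the printk_devkmsg warnings and PCI-binding notes above are its output on this guest). A minimal sketch of reserving such a per-node pool straight through sysfs; this is one common mechanism shown only for illustration, not necessarily what setup.sh itself does, and it needs root:

    #!/usr/bin/env bash
    set -euo pipefail

    node=0         # node id passed to get_test_nr_hugepages above
    pages=1024     # 2097152 kB / 2048 kB per page
    size_kb=2048   # matches "Hugepagesize: 2048 kB" in the meminfo dumps

    sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-${size_kb}kB/nr_hugepages
    echo "$pages" > "$sysfs"

    # The kernel may grant fewer pages than asked for if contiguous memory
    # is scarce, so read the count back rather than trusting the request.
    echo "node$node: requested $pages, granted $(cat "$sysfs")"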
setup/hugepages.sh@90 -- # local sorted_t 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867424 kB' 'MemAvailable: 9529884 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220932 kB' 'Inactive: 729844 kB' 'Active(anon): 95420 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 95004 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 7968 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.773 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 4096 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=4096 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867636 kB' 'MemAvailable: 9530096 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220932 kB' 'Inactive: 729844 kB' 'Active(anon): 95420 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 95392 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 7968 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
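The scan just above stops at AnonHugePages and returns 4096, which verify_nr_hugepages stores as anon=4096 before repeating the identical walk for HugePages_Surp. When only one field is needed, the same lookup collapses to a one-liner; the awk forms below are an equivalent illustration, not the script's own code:

    # System-wide counter, as read from /proc/meminfo above:
    awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo                        # -> 4096
    # Per-node counter (fields are shifted by the "Node N" prefix):
    awk '$3 == "AnonHugePages:" {print $4}' /sys/devices/system/node/node0/meminfo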
00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.774 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 
23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- 
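With anon=4096 and surp=0 recorded, the helper is invoked a third time for HugePages_Rsvd. The precise assertions live in setup/hugepages.sh and are not reproduced in this trace, but the three counters feed a pool check along the lines of the sketch below (illustrative only; the expected value of 1024 comes from the no_shrink_alloc setup above):

    #!/usr/bin/env bash
    # Illustrative accounting for the counters read above; not the exact
    # checks performed by setup/hugepages.sh:verify_nr_hugepages.
    expected=1024

    field() { awk -v f="$1:" '$1 == f {print $2}' /proc/meminfo; }

    total=$(field HugePages_Total)
    free=$(field HugePages_Free)
    rsvd=$(field HugePages_Rsvd)
    surp=$(field HugePages_Surp)

    echo "total=$total free=$free rsvd=$rsvd surp=$surp"
    # With nothing mapped yet the pool should be intact: every page free,
    # none reserved for pending faults, and no surplus pages allocated.
    (( total == expected && free == total && rsvd == 0 && surp == 0 )) && echo OK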
# get_meminfo HugePages_Rsvd 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867896 kB' 'MemAvailable: 9530356 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2220932 kB' 'Inactive: 729844 kB' 'Active(anon): 95420 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 95004 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 7968 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.775 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.776 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.777 nr_hugepages=1024 00:03:55.777 resv_hugepages=0 00:03:55.777 surplus_hugepages=0 00:03:55.777 anon_hugepages=4096 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=4096 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.777 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867540 kB' 'MemAvailable: 9530000 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2221192 kB' 'Inactive: 729844 kB' 'Active(anon): 95680 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 95004 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 8356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.777 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
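[editor's note] The long run of "IFS=': '" / "read -r var val _" / "[[ ... == ... ]]" / "continue" entries above is the xtrace of get_meminfo in setup/common.sh: it snapshots /proc/meminfo (or a node-local meminfo file when a node index is passed), strips any "Node N" prefix, and walks the keys one at a time until it hits the requested field, then echoes that value and returns. A minimal sketch of that pattern, simplified from what the trace shows (the function name and exact control flow below are illustrative, not the real common.sh code):

#!/usr/bin/env bash
# Simplified sketch of the get_meminfo pattern seen in the xtrace above.
# Not the actual setup/common.sh implementation, just the same idea:
# pick a single field out of /proc/meminfo (or a node-local meminfo file).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local line var val
    while read -r line; do
        line=${line#"Node $node "}          # per-node files prefix every key with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # the [[ ... ]] / continue churn visible in the trace
        echo "$val"                         # numeric value only; a trailing "kB" ends up in $_
        return 0
    done < "$mem_f"
    return 1
}

# e.g. get_meminfo_sketch HugePages_Total   -> 1024 (the value echoed at common.sh@33 below)
#      get_meminfo_sketch HugePages_Surp 0  -> 0    (read back from node0/meminfo)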
00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.778 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867800 kB' 'MemUsed: 5433352 kB' 'Active: 2221192 kB' 'Inactive: 729844 kB' 
'Active(anon): 95680 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2855564 kB' 'Mapped: 25248 kB' 'AnonPages: 95004 kB' 'Shmem: 16892 kB' 'KernelStack: 3696 kB' 'PageTables: 8356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'AnonHugePages: 4096 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.779 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.780 node0=1024 expecting 1024 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.780 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:55.780 0000:00:10.0 (1b36 0010): Already 
using the uio_pci_generic driver 00:03:55.780 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:55.780 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867612 kB' 'MemAvailable: 9530072 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2221192 kB' 'Inactive: 729844 kB' 'Active(anon): 95680 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94712 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 8064 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.780 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 
23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 
23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 4096 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=4096 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867612 kB' 'MemAvailable: 9530072 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2221192 kB' 'Inactive: 729844 kB' 'Active(anon): 95680 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94712 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 8064 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.781 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) 
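Note: the trace above is setup/common.sh's get_meminfo helper scanning the /proc/meminfo snapshot it just printed: every non-matching key hits the "continue" branch until the requested key (AnonHugePages earlier, HugePages_Surp here) is reached and its value is echoed back to hugepages.sh. A minimal sketch of that lookup, assuming a plain /proc/meminfo-style source; the function name is illustrative, and the real helper snapshots the file into an array with mapfile first rather than streaming it:
# Sketch only: look up one key in /proc/meminfo the way the traced loop does.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do     # same IFS=': ' / read -r var val _ as the trace
        [[ $var == "$get" ]] || continue     # the long runs of "continue" above
        echo "$val"                          # e.g. "echo 4096" for AnonHugePages
        return 0
    done < /proc/meminfo
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp    -> 0 on this host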
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.782 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 
23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867516 kB' 'MemAvailable: 9529976 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2221192 kB' 'Inactive: 729844 kB' 'Active(anon): 95680 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94712 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 8064 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
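Note: each get_meminfo call above also re-checks its data source. With no node argument the test [[ -e /sys/devices/system/node/node/meminfo ]] fails (the node number is empty), so the helper keeps reading /proc/meminfo; when a node is given it switches to the per-node file and strips the leading "Node N " prefix those files carry. A small sketch of that selection, assuming the standard sysfs layout; the function name is illustrative, and the real helper strips the prefix with mapfile plus "${mem[@]#Node +([0-9]) }" rather than sed:
# Sketch only: choose and normalize the meminfo source like the traced helper.
pick_meminfo_source() {
    local node=${1:-} mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo   # per-NUMA-node view
    fi
    sed 's/^Node [0-9]* //' "$mem_f"                         # drop the "Node 0 " prefix, if any
}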
IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.783 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.784 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.785 nr_hugepages=1024 00:03:55.785 resv_hugepages=0 00:03:55.785 surplus_hugepages=0 00:03:55.785 anon_hugepages=4096 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=4096 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 
== nr_hugepages )) 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867776 kB' 'MemAvailable: 9530236 kB' 'Buffers: 2068 kB' 'Cached: 2853496 kB' 'SwapCached: 0 kB' 'Active: 2221192 kB' 'Inactive: 729844 kB' 'Active(anon): 95680 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'AnonPages: 94712 kB' 'Mapped: 25248 kB' 'Shmem: 16892 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'KernelStack: 3696 kB' 'PageTables: 8064 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5102000 kB' 'Committed_AS: 350424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690040 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 4096 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 100204 kB' 'DirectMap2M: 4093952 kB' 'DirectMap1G: 10485760 kB' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- 
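Note: at this point hugepages.sh has read back anon=4096, surp=0 and resv=0, prints nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=4096, and runs the two arithmetic guards seen above before re-reading HugePages_Total. Plugged with this run's values (the failure messages are illustrative, not the script's):
# Sketch only: the accounting guards from the trace, with this run's numbers.
nr_hugepages=1024 surp=0 resv=0
(( 1024 == nr_hugepages + surp + resv )) || echo "surplus/reserved pages unexpectedly nonzero"
(( 1024 == nr_hugepages ))               || echo "hugepage pool size drifted from the requested 1024"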
setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.785 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.785 23:19:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [meminfo scan condensed: the read loop walks the remaining /proc/meminfo fields, Inactive(file) through HardwareCorrupted, and each one fails the HugePages_Total match and hits 'continue'] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.786 
23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301152 kB' 'MemFree: 6867488 kB' 'MemUsed: 5433664 kB' 'Active: 2221192 kB' 'Inactive: 729844 kB' 'Active(anon): 95680 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2125512 kB' 'Inactive(file): 713160 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 16 kB' 'Writeback: 0 kB' 'FilePages: 2855564 kB' 'Mapped: 25248 kB' 'AnonPages: 94712 kB' 'Shmem: 16892 kB' 'KernelStack: 3696 kB' 'PageTables: 8064 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171704 kB' 'SReclaimable: 122816 kB' 'SUnreclaim: 48888 kB' 'AnonHugePages: 4096 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.786 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.786 
23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [node0 meminfo scan condensed: fields Inactive(anon) through Slab each fail the HugePages_Surp match and hit 'continue'] 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.787 node0=1024 expecting 1024 00:03:55.787 ************************************ 00:03:55.787 END TEST no_shrink_alloc 00:03:55.787 ************************************ 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.787 00:03:55.787 real 0m0.522s 00:03:55.787 user 0m0.260s 00:03:55.787 sys 0m0.314s 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:03:55.787 23:19:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.787 23:19:17 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:55.787 23:19:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:55.787 23:19:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:55.787 23:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.787 23:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.787 23:19:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.787 23:19:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.787 23:19:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:55.787 23:19:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:55.787 00:03:55.787 real 0m2.435s 00:03:55.787 user 0m1.182s 00:03:55.787 sys 0m1.375s 00:03:55.787 ************************************ 00:03:55.787 END TEST hugepages 00:03:55.787 ************************************ 00:03:55.787 23:19:17 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.787 23:19:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.787 23:19:17 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:55.787 23:19:17 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.787 23:19:17 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.787 23:19:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.787 ************************************ 00:03:55.787 START TEST driver 00:03:55.787 ************************************ 00:03:55.787 23:19:17 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:55.787 * Looking for test storage... 
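The HugePages_Total and HugePages_Surp lookups traced above all go through the same helper in setup/common.sh: pick /proc/meminfo, or the per-node copy when a node index is given, strip the "Node N " prefix, and read "field: value" pairs until the requested key matches. A minimal standalone sketch of that flow (the _sketch name and the exact prefix handling are mine, inferred from the trace):

  get_meminfo_sketch() {
      # Pick the system-wide or the per-NUMA-node meminfo file, as in
      # the traced setup/common.sh@17-24 block.
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node "$node" }   # per-node lines carry a "Node N " prefix
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"              # bare value, e.g. 1024 or 0 above
              return 0
          fi
      done < "$mem_f"
      return 1
  }

Called as get_meminfo_sketch HugePages_Total or get_meminfo_sketch HugePages_Surp 0, it prints the bare numbers (1024 and 0 in the run above) that setup/hugepages.sh then feeds into its per-node accounting and the final 'node0=1024 expecting 1024' check.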
00:03:55.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:55.787 23:19:17 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:55.787 23:19:17 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.787 23:19:17 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.787 23:19:18 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:55.787 23:19:18 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.788 23:19:18 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.788 23:19:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:55.788 ************************************ 00:03:55.788 START TEST guess_driver 00:03:55.788 ************************************ 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/3.10.0-1160.114.2.el7.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:55.788 insmod /lib/modules/3.10.0-1160.114.2.el7.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:55.788 Looking for driver=uio_pci_generic 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.788 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.788 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.788 ************************************ 00:03:55.788 END TEST guess_driver 00:03:55.788 ************************************ 00:03:55.788 00:03:55.788 real 0m0.667s 00:03:55.788 user 0m0.250s 00:03:55.788 sys 0m0.395s 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.788 23:19:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:55.788 ************************************ 00:03:55.788 END TEST driver 00:03:55.788 ************************************ 00:03:55.788 00:03:55.788 real 0m1.074s 00:03:55.788 user 0m0.390s 00:03:55.788 sys 0m0.656s 00:03:55.788 23:19:18 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.788 23:19:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:55.788 23:19:18 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:55.788 23:19:18 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.788 23:19:18 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.788 23:19:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.788 ************************************ 00:03:55.788 START TEST devices 00:03:55.788 ************************************ 00:03:55.788 23:19:18 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:55.788 * Looking for test storage... 
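The guess_driver run above settles on uio_pci_generic because the host exposes no IOMMU groups, unsafe no-IOMMU mode is not enabled, and modprobe --show-depends can resolve uio_pci_generic to real .ko files. A condensed sketch of that decision (the function name and the vfio-pci label are assumptions, since the traced run returns from the vfio branch before echoing anything; the real script globs /sys/kernel/iommu_groups/* rather than using find):

  pick_driver_sketch() {
      local unsafe="" groups
      groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      if (( groups > 0 )) || [[ $unsafe == Y ]]; then
          echo vfio-pci              # assumed label; this branch is not reached above
      elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
          echo uio_pci_generic       # the branch taken in the traced run
      else
          echo 'No valid driver found'
          return 1
      fi
  }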
00:03:55.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:55.788 23:19:18 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:55.788 23:19:18 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:55.788 23:19:18 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.788 23:19:18 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:56.046 23:19:19 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:56.046 23:19:19 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:56.046 23:19:19 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:56.046 23:19:19 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:56.046 23:19:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:56.046 23:19:19 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:56.046 23:19:19 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.046 23:19:19 setup.sh.devices -- common/autotest_common.sh@1660 -- # return 1 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:56.046 23:19:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:56.046 23:19:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:56.046 No valid GPT data, bailing 00:03:56.046 23:19:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.046 23:19:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:56.046 23:19:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:56.046 23:19:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:56.046 23:19:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:56.046 23:19:19 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:56.046 23:19:19 setup.sh.devices -- setup/devices.sh@209 -- # (( 
1 > 0 )) 00:03:56.047 23:19:19 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:56.047 23:19:19 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:56.047 23:19:19 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:56.047 23:19:19 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:56.047 23:19:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:56.047 ************************************ 00:03:56.047 START TEST nvme_mount 00:03:56.047 ************************************ 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:56.047 23:19:19 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:57.420 Creating new GPT entries. 00:03:57.420 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:57.420 other utilities. 00:03:57.420 23:19:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:57.420 23:19:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.420 23:19:20 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:57.420 23:19:20 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.420 23:19:20 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:58.355 Creating new GPT entries. 
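The nvme_mount test starts by re-partitioning the test disk, which is what the sgdisk calls above are doing. A compressed sketch of that loop (device path and numbers taken from the trace, so --new=1:2048:264191 comes out identical; the udevadm settle stand-in is mine, the real helper waits on scripts/sync_dev_uevents.sh instead):

  disk=/dev/nvme0n1
  parts=1
  size=$(( 1073741824 / 4096 ))     # 262144 sectors per partition, as traced
  sgdisk "$disk" --zap-all          # wipe any existing GPT/MBR
  start=2048
  for (( p = 1; p <= parts; p++ )); do
      end=$(( start + size - 1 ))
      # flock serialises sgdisk runs against the same device
      flock "$disk" sgdisk "$disk" --new=$p:$start:$end
      start=$(( end + 1 ))
  done
  udevadm settle                    # wait for the new partition nodes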
00:03:58.355 The operation has completed successfully. 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 40867 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:03:58.355 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.356 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:03:58.356 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 
00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.615 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:58.615 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:58.615 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:58.615 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:58.615 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:58.615 /dev/nvme0n1: calling ioclt to re-read partition table: Success 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:58.615 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.873 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:58.873 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 
-- # local dev=0000:00:10.0 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.874 23:19:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.874 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.874 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 
data@nvme0n1 '' '' 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.874 23:19:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.874 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.131 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:59.131 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.389 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.389 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.389 23:19:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.389 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.389 ************************************ 00:03:59.389 END TEST nvme_mount 00:03:59.389 ************************************ 00:03:59.389 00:03:59.389 real 0m3.195s 00:03:59.389 
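Stripped of the xtrace noise, the nvme_mount pass that just finished boils down to the cycle below. This is a sketch of the traced commands rather than the test script itself; paths mirror the trace and the verify step is reduced to a single existence check:

  dev=/dev/nvme0n1p1
  mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  mkdir -p "$mnt"
  mkfs.ext4 -qF "$dev"              # quiet, force: the partition is brand new
  mount "$dev" "$mnt"
  : > "$mnt/test_nvme"              # dummy file the verify step looks for
  [[ -e $mnt/test_nvme ]]
  rm "$mnt/test_nvme"
  umount "$mnt"
  wipefs --all "$dev"               # the trace then also wipes /dev/nvme0n1 itself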
user 0m0.393s 00:03:59.389 sys 0m0.647s 00:03:59.389 23:19:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:59.389 23:19:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:59.389 23:19:22 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:59.389 23:19:22 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:59.389 23:19:22 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:59.389 23:19:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:59.389 ************************************ 00:03:59.389 START TEST dm_mount 00:03:59.389 ************************************ 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:59.389 23:19:22 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:00.325 Creating new GPT entries. 00:04:00.325 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:00.325 other utilities. 00:04:00.325 23:19:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:00.325 23:19:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.325 23:19:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:00.325 23:19:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.325 23:19:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:01.258 Creating new GPT entries. 00:04:01.258 The operation has completed successfully. 00:04:01.258 23:19:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:01.258 23:19:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.258 23:19:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:01.258 23:19:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:01.258 23:19:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:02.633 The operation has completed successfully. 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 41189 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.633 23:19:25 
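The two sgdisk calls above give the dm_mount test its partitions (sectors 2048-264191 and 264192-526335). The trace then shows dmsetup create nvme_dm_test resolving to /dev/dm-0, but the table fed to dmsetup never appears in the xtrace output, so the linear concatenation below is an assumed reconstruction, not the test's literal table:

  p1=/dev/nvme0n1p1
  p2=/dev/nvme0n1p2
  s1=$(blockdev --getsz "$p1")      # partition sizes in 512-byte sectors
  s2=$(blockdev --getsz "$p2")
  # Table format: start_sector num_sectors linear backing_device offset
  printf '%s\n' "0 $s1 linear $p1 0" "$s1 $s2 linear $p2 0" |
      dmsetup create nvme_dm_test
  readlink -f /dev/mapper/nvme_dm_test   # /dev/dm-0 in the traced run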
setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:02.633 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:02.633 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.634 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:02.634 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:02.634 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:02.634 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.892 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.892 23:19:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:02.892 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:02.892 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:02.892 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.892 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:02.892 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.892 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:02.892 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:02.892 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:02.892 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.150 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # 
dmsetup remove --force nvme_dm_test 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:03.150 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:03.150 ************************************ 00:04:03.150 END TEST dm_mount 00:04:03.150 ************************************ 00:04:03.150 00:04:03.150 real 0m3.798s 00:04:03.150 user 0m0.264s 00:04:03.150 sys 0m0.455s 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:03.150 23:19:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:03.150 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:03.150 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:03.150 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:03.150 /dev/nvme0n1: calling ioclt to re-read partition table: Success 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.150 23:19:26 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:03.150 00:04:03.150 real 0m7.549s 00:04:03.150 user 0m0.917s 00:04:03.150 sys 0m1.385s 00:04:03.150 ************************************ 00:04:03.150 END TEST devices 00:04:03.150 23:19:26 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:03.150 23:19:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:03.150 ************************************ 00:04:03.150 00:04:03.150 real 0m13.269s 00:04:03.150 user 0m3.521s 00:04:03.150 sys 0m4.631s 00:04:03.150 23:19:26 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:03.150 ************************************ 00:04:03.150 END TEST setup.sh 00:04:03.150 ************************************ 00:04:03.150 23:19:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:03.150 23:19:26 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:03.409 
/home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:03.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:03.409 Hugepages 00:04:03.409 node hugesize free / total 00:04:03.409 node0 1048576kB 0 / 0 00:04:03.409 node0 2048kB 2048 / 2048 00:04:03.409 00:04:03.409 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.409 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:03.409 NVMe 0000:00:10.0 1b36 0010 0 nvme nvme0 nvme0n1 00:04:03.409 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:03.409 23:19:26 -- spdk/autotest.sh@130 -- # uname -s 00:04:03.409 23:19:26 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:03.409 23:19:26 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:03.409 23:19:26 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.666 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:03.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:03.933 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.933 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:03.933 23:19:27 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:04.865 23:19:28 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:04.865 23:19:28 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:04.865 23:19:28 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:04.865 23:19:28 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:04.865 23:19:28 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:04.865 23:19:28 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:04.865 23:19:28 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.865 23:19:28 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.865 23:19:28 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:05.254 23:19:28 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:05.254 23:19:28 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:04:05.254 23:19:28 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.254 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:05.254 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:05.254 Waiting for block devices as requested 00:04:05.254 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.254 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:05.254 23:19:28 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:05.254 23:19:28 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:05.254 23:19:28 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:05.254 23:19:28 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:04:05.254 23:19:28 -- common/autotest_common.sh@1498 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:04:05.254 23:19:28 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:04:05.254 23:19:28 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:04:05.254 23:19:28 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:05.254 23:19:28 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:05.254 23:19:28 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:05.254 23:19:28 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:05.254 23:19:28 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:05.254 23:19:28 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:05.254 23:19:28 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:04:05.254 23:19:28 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:05.254 23:19:28 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:05.254 23:19:28 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:05.254 23:19:28 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:05.254 23:19:28 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:05.254 23:19:28 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:05.254 23:19:28 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:05.254 23:19:28 -- common/autotest_common.sh@1553 -- # continue 00:04:05.254 23:19:28 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:05.254 23:19:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.254 23:19:28 -- common/autotest_common.sh@10 -- # set +x 00:04:05.254 23:19:28 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:05.254 23:19:28 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:05.254 23:19:28 -- common/autotest_common.sh@10 -- # set +x 00:04:05.254 23:19:28 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.254 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:05.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:04:05.768 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.768 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:04:05.768 23:19:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:05.768 23:19:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.768 23:19:28 -- common/autotest_common.sh@10 -- # set +x 00:04:05.768 23:19:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:05.768 23:19:28 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:05.768 23:19:28 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:05.768 23:19:28 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:05.768 23:19:28 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:05.768 23:19:28 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:05.768 23:19:28 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:05.768 23:19:28 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:05.768 23:19:28 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.768 23:19:28 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.768 23:19:28 -- common/autotest_common.sh@1510 -- # jq -r 
'.config[].params.traddr' 00:04:05.768 23:19:28 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:05.768 23:19:28 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:04:05.768 23:19:28 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:05.768 23:19:28 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:05.768 23:19:28 -- common/autotest_common.sh@1576 -- # device=0x0010 00:04:05.768 23:19:28 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.768 23:19:28 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:04:05.768 23:19:28 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:04:05.768 23:19:28 -- common/autotest_common.sh@1589 -- # return 0 00:04:05.768 23:19:28 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:04:05.768 23:19:28 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:05.768 23:19:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:05.768 23:19:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.768 23:19:28 -- common/autotest_common.sh@10 -- # set +x 00:04:05.768 ************************************ 00:04:05.768 START TEST unittest 00:04:05.768 ************************************ 00:04:05.768 23:19:29 unittest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:05.768 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:05.768 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:05.768 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:05.768 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:05.768 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
00:04:05.768 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:05.768 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:05.768 ++ rpc_py=rpc_cmd 00:04:05.768 ++ set -e 00:04:05.768 ++ shopt -s nullglob 00:04:05.768 ++ shopt -s extglob 00:04:05.768 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:04:05.768 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:05.768 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:05.768 +++ CONFIG_RDMA=y 00:04:05.768 +++ CONFIG_UNIT_TESTS=y 00:04:05.768 +++ CONFIG_GOLANG=n 00:04:05.768 +++ CONFIG_FUSE=n 00:04:05.768 +++ CONFIG_ISAL=n 00:04:05.768 +++ CONFIG_VTUNE_DIR= 00:04:05.768 +++ CONFIG_CUSTOMOCF=n 00:04:05.768 +++ CONFIG_IPSEC_MB_DIR= 00:04:05.768 +++ CONFIG_VBDEV_COMPRESS=n 00:04:05.768 +++ CONFIG_OCF_PATH= 00:04:05.768 +++ CONFIG_SHARED=n 00:04:05.768 +++ CONFIG_DPDK_LIB_DIR= 00:04:05.768 +++ CONFIG_PGO_DIR= 00:04:05.768 +++ CONFIG_TESTS=y 00:04:05.768 +++ CONFIG_APPS=y 00:04:05.768 +++ CONFIG_ISAL_CRYPTO=n 00:04:05.768 +++ CONFIG_LIBDIR= 00:04:05.768 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:05.768 +++ CONFIG_DAOS_DIR= 00:04:05.768 +++ CONFIG_ISCSI_INITIATOR=n 00:04:05.768 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:05.768 +++ CONFIG_ASAN=y 00:04:05.768 +++ CONFIG_LTO=n 00:04:05.768 +++ CONFIG_CET=n 00:04:05.768 +++ CONFIG_FUZZER=n 00:04:05.768 +++ CONFIG_USDT=n 00:04:05.768 +++ CONFIG_VTUNE=n 00:04:05.768 +++ CONFIG_VHOST=y 00:04:05.768 +++ CONFIG_WPDK_DIR= 00:04:05.768 +++ CONFIG_UBLK=n 00:04:05.768 +++ CONFIG_URING=n 00:04:05.768 +++ CONFIG_SMA=n 00:04:05.768 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:05.769 +++ CONFIG_IDXD_KERNEL=n 00:04:05.769 +++ CONFIG_FC_PATH= 00:04:05.769 +++ CONFIG_PREFIX=/usr/local 00:04:05.769 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:04:05.769 +++ CONFIG_XNVME=n 00:04:05.769 +++ CONFIG_RDMA_PROV=verbs 00:04:05.769 +++ CONFIG_RDMA_SET_TOS=y 00:04:05.769 +++ CONFIG_FUZZER_LIB= 00:04:05.769 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:05.769 +++ CONFIG_ARCH=native 00:04:05.769 +++ CONFIG_PGO_CAPTURE=n 00:04:05.769 +++ CONFIG_DAOS=y 00:04:05.769 +++ CONFIG_WERROR=y 00:04:05.769 +++ CONFIG_DEBUG=y 00:04:05.769 +++ CONFIG_AVAHI=n 00:04:05.769 +++ CONFIG_CROSS_PREFIX= 00:04:05.769 +++ CONFIG_HAVE_KEYUTILS=n 00:04:05.769 +++ CONFIG_PGO_USE=n 00:04:05.769 +++ CONFIG_CRYPTO=n 00:04:05.769 +++ CONFIG_HAVE_ARC4RANDOM=n 00:04:05.769 +++ CONFIG_OPENSSL_PATH= 00:04:05.769 +++ CONFIG_EXAMPLES=y 00:04:05.769 +++ CONFIG_DPDK_INC_DIR= 00:04:05.769 +++ CONFIG_HAVE_EVP_MAC=n 00:04:05.769 +++ CONFIG_MAX_LCORES= 00:04:05.769 +++ CONFIG_VIRTIO=y 00:04:05.769 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:05.769 +++ CONFIG_IPSEC_MB=n 00:04:05.769 +++ CONFIG_UBSAN=n 00:04:05.769 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:05.769 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:05.769 +++ CONFIG_HAVE_LIBBSD=n 00:04:05.769 +++ CONFIG_URING_PATH= 00:04:05.769 +++ CONFIG_NVME_CUSE=y 00:04:05.769 +++ CONFIG_URING_ZNS=n 00:04:05.769 +++ CONFIG_VFIO_USER=n 00:04:05.769 +++ CONFIG_FC=n 00:04:05.769 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:04:05.769 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:05.769 +++ CONFIG_RBD=n 00:04:05.769 +++ CONFIG_RAID5F=n 00:04:05.769 +++ CONFIG_VFIO_USER_DIR= 00:04:05.769 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:05.769 +++ CONFIG_TSAN=n 00:04:05.769 +++ CONFIG_IDXD=y 00:04:05.769 +++ CONFIG_DPDK_UADK=n 00:04:05.769 +++ CONFIG_OCF=n 00:04:05.769 +++ CONFIG_CRYPTO_MLX5=n 00:04:05.769 +++ CONFIG_FIO_PLUGIN=y 00:04:05.769 +++ CONFIG_COVERAGE=y 00:04:05.769 ++ source 
/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:05.769 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:05.769 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:04:05.769 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:04:05.769 +++ _root=/home/vagrant/spdk_repo/spdk 00:04:05.769 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:04:05.769 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:04:05.769 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:04:05.769 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:04:05.769 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:04:05.769 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:04:05.769 +++ VHOST_APP=("$_app_dir/vhost") 00:04:05.769 +++ DD_APP=("$_app_dir/spdk_dd") 00:04:05.769 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:04:05.769 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:04:05.769 +++ [[ #ifndef SPDK_CONFIG_H 00:04:05.769 #define SPDK_CONFIG_H 00:04:05.769 #define SPDK_CONFIG_APPS 1 00:04:05.769 #define SPDK_CONFIG_ARCH native 00:04:05.769 #define SPDK_CONFIG_ASAN 1 00:04:05.769 #undef SPDK_CONFIG_AVAHI 00:04:05.769 #undef SPDK_CONFIG_CET 00:04:05.769 #define SPDK_CONFIG_COVERAGE 1 00:04:05.769 #define SPDK_CONFIG_CROSS_PREFIX 00:04:05.769 #undef SPDK_CONFIG_CRYPTO 00:04:05.769 #undef SPDK_CONFIG_CRYPTO_MLX5 00:04:05.769 #undef SPDK_CONFIG_CUSTOMOCF 00:04:05.769 #define SPDK_CONFIG_DAOS 1 00:04:05.769 #define SPDK_CONFIG_DAOS_DIR 00:04:05.769 #define SPDK_CONFIG_DEBUG 1 00:04:05.769 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:04:05.769 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:05.769 #define SPDK_CONFIG_DPDK_INC_DIR 00:04:05.769 #define SPDK_CONFIG_DPDK_LIB_DIR 00:04:05.769 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:04:05.769 #undef SPDK_CONFIG_DPDK_UADK 00:04:05.769 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:05.769 #define SPDK_CONFIG_EXAMPLES 1 00:04:05.769 #undef SPDK_CONFIG_FC 00:04:05.769 #define SPDK_CONFIG_FC_PATH 00:04:05.769 #define SPDK_CONFIG_FIO_PLUGIN 1 00:04:05.769 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:04:05.769 #undef SPDK_CONFIG_FUSE 00:04:05.769 #undef SPDK_CONFIG_FUZZER 00:04:05.769 #define SPDK_CONFIG_FUZZER_LIB 00:04:05.769 #undef SPDK_CONFIG_GOLANG 00:04:05.769 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:04:05.769 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:04:05.769 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:04:05.769 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:04:05.769 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:04:05.769 #undef SPDK_CONFIG_HAVE_LIBBSD 00:04:05.769 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:04:05.769 #define SPDK_CONFIG_IDXD 1 00:04:05.769 #undef SPDK_CONFIG_IDXD_KERNEL 00:04:05.769 #undef SPDK_CONFIG_IPSEC_MB 00:04:05.769 #define SPDK_CONFIG_IPSEC_MB_DIR 00:04:05.769 #undef SPDK_CONFIG_ISAL 00:04:05.769 #undef SPDK_CONFIG_ISAL_CRYPTO 00:04:05.769 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:04:05.769 #define SPDK_CONFIG_LIBDIR 00:04:05.769 #undef SPDK_CONFIG_LTO 00:04:05.769 #define SPDK_CONFIG_MAX_LCORES 00:04:05.769 #define SPDK_CONFIG_NVME_CUSE 1 00:04:05.769 #undef SPDK_CONFIG_OCF 00:04:05.769 #define SPDK_CONFIG_OCF_PATH 00:04:05.769 #define SPDK_CONFIG_OPENSSL_PATH 00:04:05.769 #undef SPDK_CONFIG_PGO_CAPTURE 00:04:05.769 #define SPDK_CONFIG_PGO_DIR 00:04:05.769 #undef SPDK_CONFIG_PGO_USE 00:04:05.769 #define SPDK_CONFIG_PREFIX /usr/local 00:04:05.769 #undef SPDK_CONFIG_RAID5F 00:04:05.769 #undef SPDK_CONFIG_RBD 00:04:05.769 #define SPDK_CONFIG_RDMA 1 
00:04:05.769 #define SPDK_CONFIG_RDMA_PROV verbs 00:04:05.769 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:04:05.769 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:04:05.769 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:04:05.769 #undef SPDK_CONFIG_SHARED 00:04:05.769 #undef SPDK_CONFIG_SMA 00:04:05.769 #define SPDK_CONFIG_TESTS 1 00:04:05.769 #undef SPDK_CONFIG_TSAN 00:04:05.769 #undef SPDK_CONFIG_UBLK 00:04:05.769 #undef SPDK_CONFIG_UBSAN 00:04:05.769 #define SPDK_CONFIG_UNIT_TESTS 1 00:04:05.769 #undef SPDK_CONFIG_URING 00:04:05.769 #define SPDK_CONFIG_URING_PATH 00:04:05.769 #undef SPDK_CONFIG_URING_ZNS 00:04:05.769 #undef SPDK_CONFIG_USDT 00:04:05.769 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:04:05.769 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:04:05.769 #undef SPDK_CONFIG_VFIO_USER 00:04:05.769 #define SPDK_CONFIG_VFIO_USER_DIR 00:04:05.769 #define SPDK_CONFIG_VHOST 1 00:04:05.769 #define SPDK_CONFIG_VIRTIO 1 00:04:05.769 #undef SPDK_CONFIG_VTUNE 00:04:05.769 #define SPDK_CONFIG_VTUNE_DIR 00:04:05.769 #define SPDK_CONFIG_WERROR 1 00:04:05.769 #define SPDK_CONFIG_WPDK_DIR 00:04:05.769 #undef SPDK_CONFIG_XNVME 00:04:05.769 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:04:05.769 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:04:05.769 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:05.769 +++ [[ -e /bin/wpdk_common.sh ]] 00:04:05.769 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.769 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.769 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:05.769 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:05.769 ++++ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:05.769 ++++ export PATH 00:04:05.769 ++++ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:05.769 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:05.769 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:05.769 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:05.769 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:05.769 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:04:05.769 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:04:05.769 +++ TEST_TAG=N/A 00:04:05.769 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:04:05.769 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:04:05.769 ++++ uname -s 00:04:05.769 +++ PM_OS=Linux 00:04:05.769 +++ MONITOR_RESOURCES_SUDO=() 00:04:05.769 +++ declare -A MONITOR_RESOURCES_SUDO 00:04:05.769 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:04:05.769 
+++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:04:05.769 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:04:05.769 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:04:05.769 +++ SUDO[0]= 00:04:05.769 +++ SUDO[1]='sudo -E' 00:04:05.769 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:04:05.769 +++ [[ Linux == FreeBSD ]] 00:04:05.769 +++ [[ Linux == Linux ]] 00:04:05.769 +++ [[ QEMU != QEMU ]] 00:04:05.769 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:04:05.769 ++ : 0 00:04:05.769 ++ export RUN_NIGHTLY 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_RUN_VALGRIND 00:04:05.769 ++ : 1 00:04:05.769 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:04:05.769 ++ : 1 00:04:05.769 ++ export SPDK_TEST_UNITTEST 00:04:05.769 ++ : 00:04:05.769 ++ export SPDK_TEST_AUTOBUILD 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_RELEASE_BUILD 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_ISAL 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_ISCSI 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_ISCSI_INITIATOR 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_NVME 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_NVME_PMR 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_NVME_BP 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_NVME_CLI 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_NVME_CUSE 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_NVME_FDP 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_NVMF 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_VFIOUSER 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_VFIOUSER_QEMU 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_FUZZER 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_FUZZER_SHORT 00:04:05.769 ++ : rdma 00:04:05.769 ++ export SPDK_TEST_NVMF_TRANSPORT 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_RBD 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_VHOST 00:04:05.769 ++ : 1 00:04:05.769 ++ export SPDK_TEST_BLOCKDEV 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_IOAT 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_BLOBFS 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_VHOST_INIT 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_LVOL 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_VBDEV_COMPRESS 00:04:05.769 ++ : 1 00:04:05.769 ++ export SPDK_RUN_ASAN 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_RUN_UBSAN 00:04:05.769 ++ : 00:04:05.769 ++ export SPDK_RUN_EXTERNAL_DPDK 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_RUN_NON_ROOT 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_CRYPTO 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_FTL 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_OCF 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_VMD 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_OPAL 00:04:05.769 ++ : 00:04:05.769 ++ export SPDK_TEST_NATIVE_DPDK 00:04:05.769 ++ : true 00:04:05.769 ++ export SPDK_AUTOTEST_X 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_RAID5 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_URING 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_USDT 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_USE_IGB_UIO 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_SCHEDULER 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_SCANBUILD 00:04:05.769 ++ : 00:04:05.769 ++ export SPDK_TEST_NVMF_NICS 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_SMA 00:04:05.769 ++ : 1 00:04:05.769 ++ export SPDK_TEST_DAOS 
00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_XNVME 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_ACCEL_DSA 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_ACCEL_IAA 00:04:05.769 ++ : 00:04:05.769 ++ export SPDK_TEST_FUZZER_TARGET 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_TEST_NVMF_MDNS 00:04:05.769 ++ : 0 00:04:05.769 ++ export SPDK_JSONRPC_GO_CLIENT 00:04:05.769 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:05.769 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:05.769 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:05.769 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:05.769 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:05.769 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:05.769 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:05.769 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:05.769 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:04:05.769 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:04:05.769 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:05.769 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:05.769 ++ export PYTHONDONTWRITEBYTECODE=1 00:04:05.769 ++ PYTHONDONTWRITEBYTECODE=1 00:04:05.769 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:05.769 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:05.769 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:05.769 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:05.769 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:04:05.769 ++ rm -rf /var/tmp/asan_suppression_file 00:04:05.769 ++ cat 00:04:06.027 ++ echo leak:libfuse3.so 00:04:06.027 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:06.027 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:06.027 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:06.027 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:06.027 ++ '[' -z /var/spdk/dependencies ']' 00:04:06.027 ++ export DEPENDENCY_DIR 00:04:06.027 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:06.027 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:06.027 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:06.027 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:06.027 ++ export QEMU_BIN= 00:04:06.027 ++ QEMU_BIN= 00:04:06.027 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:06.028 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:06.028 ++ export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:06.028 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:06.028 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:06.028 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:06.028 ++ '[' 0 -eq 0 ']' 00:04:06.028 ++ export valgrind= 00:04:06.028 ++ valgrind= 00:04:06.028 +++ uname -s 00:04:06.028 ++ '[' Linux = Linux ']' 00:04:06.028 ++ HUGEMEM=4096 00:04:06.028 ++ export CLEAR_HUGE=yes 00:04:06.028 ++ CLEAR_HUGE=yes 00:04:06.028 ++ [[ 0 -eq 1 ]] 00:04:06.028 ++ [[ 0 -eq 1 ]] 00:04:06.028 ++ MAKE=make 00:04:06.028 +++ nproc 00:04:06.028 ++ MAKEFLAGS=-j10 00:04:06.028 ++ export HUGEMEM=4096 00:04:06.028 ++ HUGEMEM=4096 00:04:06.028 ++ NO_HUGE=() 00:04:06.028 ++ TEST_MODE= 00:04:06.028 ++ [[ -z '' ]] 00:04:06.028 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:06.028 ++ exec 00:04:06.028 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:06.028 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:04:06.028 ++ set_test_storage 2147483648 00:04:06.028 ++ [[ -v testdir ]] 00:04:06.028 ++ local requested_size=2147483648 00:04:06.028 ++ local mount target_dir 00:04:06.028 ++ local -A mounts fss sizes avails uses 00:04:06.028 ++ local source fs size avail mount use 00:04:06.028 ++ local storage_fallback storage_candidates 00:04:06.028 +++ mktemp -udt spdk.XXXXXX 00:04:06.028 ++ storage_fallback=/tmp/spdk.HEo0fg 00:04:06.028 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:04:06.028 ++ [[ -n '' ]] 00:04:06.028 ++ [[ -n '' ]] 00:04:06.028 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.HEo0fg/tests/unit /tmp/spdk.HEo0fg 00:04:06.028 ++ requested_size=2214592512 00:04:06.028 ++ read -r source fs size use avail _ mount 00:04:06.028 +++ df -T 00:04:06.028 +++ grep -v Filesystem 00:04:06.028 ++ mounts["$mount"]=devtmpfs 00:04:06.028 ++ fss["$mount"]=devtmpfs 00:04:06.028 ++ avails["$mount"]=6267637760 00:04:06.028 ++ sizes["$mount"]=6267637760 00:04:06.028 ++ uses["$mount"]=0 00:04:06.028 ++ read -r source fs size use avail _ mount 00:04:06.028 ++ mounts["$mount"]=tmpfs 00:04:06.028 ++ fss["$mount"]=tmpfs 00:04:06.028 ++ avails["$mount"]=6298189824 00:04:06.028 ++ sizes["$mount"]=6298189824 00:04:06.028 ++ uses["$mount"]=0 00:04:06.028 ++ read -r source fs size use avail _ mount 00:04:06.028 ++ mounts["$mount"]=tmpfs 00:04:06.028 ++ fss["$mount"]=tmpfs 00:04:06.028 ++ avails["$mount"]=6280888320 00:04:06.028 ++ sizes["$mount"]=6298189824 00:04:06.028 ++ uses["$mount"]=17301504 00:04:06.028 ++ read -r source fs size use avail _ mount 00:04:06.028 ++ mounts["$mount"]=tmpfs 00:04:06.028 ++ fss["$mount"]=tmpfs 00:04:06.028 ++ avails["$mount"]=6298189824 00:04:06.028 ++ sizes["$mount"]=6298189824 00:04:06.028 ++ uses["$mount"]=0 00:04:06.028 ++ read -r source fs size use avail _ mount 00:04:06.028 ++ mounts["$mount"]=/dev/vda1 00:04:06.028 ++ fss["$mount"]=xfs 00:04:06.028 ++ avails["$mount"]=14339645440 00:04:06.028 ++ sizes["$mount"]=21463302144 00:04:06.028 ++ uses["$mount"]=7123656704 00:04:06.028 ++ read -r source fs size use avail _ mount 00:04:06.028 ++ mounts["$mount"]=tmpfs 00:04:06.028 ++ fss["$mount"]=tmpfs 00:04:06.028 ++ avails["$mount"]=1259638784 00:04:06.028 ++ sizes["$mount"]=1259638784 00:04:06.028 ++ uses["$mount"]=0 00:04:06.028 ++ read -r source fs size use avail _ mount 00:04:06.028 ++ 
mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:04:06.028 ++ fss["$mount"]=fuse.sshfs 00:04:06.028 ++ avails["$mount"]=93621067776 00:04:06.028 ++ sizes["$mount"]=105088212992 00:04:06.028 ++ uses["$mount"]=6081712128 00:04:06.028 ++ read -r source fs size use avail _ mount 00:04:06.028 ++ printf '* Looking for test storage...\n' 00:04:06.028 * Looking for test storage... 00:04:06.028 ++ local target_space new_size 00:04:06.028 ++ for target_dir in "${storage_candidates[@]}" 00:04:06.028 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:04:06.028 +++ awk '$1 !~ /Filesystem/{print $6}' 00:04:06.028 ++ mount=/ 00:04:06.028 ++ target_space=14339645440 00:04:06.028 ++ (( target_space == 0 || target_space < requested_size )) 00:04:06.028 ++ (( target_space >= requested_size )) 00:04:06.028 ++ [[ xfs == tmpfs ]] 00:04:06.028 ++ [[ xfs == ramfs ]] 00:04:06.028 ++ [[ / == / ]] 00:04:06.028 ++ new_size=9338249216 00:04:06.028 ++ (( new_size * 100 / sizes[/] > 95 )) 00:04:06.028 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:06.028 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:06.028 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:04:06.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:04:06.028 ++ return 0 00:04:06.028 ++ set -o errtrace 00:04:06.028 ++ shopt -s extdebug 00:04:06.028 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:04:06.028 ++ PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:04:06.028 23:19:29 unittest -- common/autotest_common.sh@1683 -- # true 00:04:06.028 23:19:29 unittest -- common/autotest_common.sh@1685 -- # xtrace_fd 00:04:06.028 23:19:29 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:04:06.028 23:19:29 unittest -- common/autotest_common.sh@29 -- # exec 00:04:06.028 23:19:29 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:04:06.028 23:19:29 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:04:06.028 23:19:29 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:04:06.028 23:19:29 unittest -- common/autotest_common.sh@18 -- # set -x 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@158 -- # '[' -z x ']' 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@179 -- # hash lcov 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@180 -- # cov_avail=yes 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:04:06.028 --rc lcov_branch_coverage=1 00:04:06.028 --rc lcov_function_coverage=1 00:04:06.028 --rc genhtml_branch_coverage=1 00:04:06.028 --rc genhtml_function_coverage=1 00:04:06.028 --rc genhtml_legend=1 00:04:06.028 --rc geninfo_all_blocks=1 00:04:06.028 ' 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:04:06.028 --rc lcov_branch_coverage=1 00:04:06.028 --rc lcov_function_coverage=1 00:04:06.028 --rc genhtml_branch_coverage=1 00:04:06.028 --rc genhtml_function_coverage=1 00:04:06.028 --rc genhtml_legend=1 00:04:06.028 --rc geninfo_all_blocks=1 00:04:06.028 ' 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:04:06.028 --rc lcov_branch_coverage=1 00:04:06.028 --rc lcov_function_coverage=1 00:04:06.028 --rc genhtml_branch_coverage=1 00:04:06.028 --rc genhtml_function_coverage=1 00:04:06.028 --rc genhtml_legend=1 00:04:06.028 --rc geninfo_all_blocks=1 00:04:06.028 --no-external' 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@200 -- # LCOV='lcov 00:04:06.028 --rc lcov_branch_coverage=1 00:04:06.028 --rc lcov_function_coverage=1 00:04:06.028 --rc genhtml_branch_coverage=1 00:04:06.028 --rc genhtml_function_coverage=1 00:04:06.028 --rc genhtml_legend=1 00:04:06.028 --rc geninfo_all_blocks=1 00:04:06.028 --no-external' 00:04:06.028 23:19:29 unittest -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:04:14.139 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:14.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:14.139 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:14.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:14.139 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:14.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:32.262 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:32.262 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:32.262 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:32.262 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:32.262 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:32.262 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:32.262 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:32.262 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:32.262 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:32.262 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:32.262 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:32.262 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:32.262 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:32.262 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:32.263 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 
00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:32.263 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:32.263 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:32.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:32.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:19.028 23:20:40 unittest -- unit/unittest.sh@206 -- # uname -m 00:05:19.028 23:20:40 unittest -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:05:19.028 23:20:40 unittest -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:19.028 23:20:40 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.028 23:20:40 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.028 23:20:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:19.028 ************************************ 00:05:19.028 START TEST unittest_pci_event 00:05:19.028 ************************************ 00:05:19.028 23:20:40 unittest.unittest_pci_event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:19.028 00:05:19.028 00:05:19.028 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.028 http://cunit.sourceforge.net/ 00:05:19.028 00:05:19.028 00:05:19.028 Suite: pci_event 00:05:19.028 Test: test_pci_parse_event ...passed 00:05:19.028 00:05:19.028 [2024-05-14 23:20:40.071934] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:19.028 [2024-05-14 23:20:40.072250] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:19.028 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:19.028 suites 1 1 n/a 0 0 00:05:19.028 tests 1 1 1 0 0 00:05:19.028 asserts 15 15 15 0 n/a 00:05:19.028 00:05:19.028 Elapsed time = 0.000 seconds 00:05:19.028 ************************************ 00:05:19.028 END TEST unittest_pci_event 00:05:19.028 ************************************ 00:05:19.028 00:05:19.028 real 0m0.030s 00:05:19.028 user 0m0.013s 00:05:19.028 sys 0m0.016s 00:05:19.028 23:20:40 unittest.unittest_pci_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.028 23:20:40 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:05:19.028 23:20:40 unittest -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:19.028 23:20:40 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.028 23:20:40 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.028 23:20:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:19.028 ************************************ 00:05:19.028 START TEST unittest_include 00:05:19.028 ************************************ 00:05:19.028 23:20:40 unittest.unittest_include -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:19.028 00:05:19.028 00:05:19.028 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.028 http://cunit.sourceforge.net/ 00:05:19.028 00:05:19.028 00:05:19.028 Suite: histogram 00:05:19.028 Test: histogram_test ...passed 00:05:19.028 Test: histogram_merge ...passed 00:05:19.028 00:05:19.028 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.028 suites 1 1 n/a 0 0 00:05:19.028 tests 2 2 2 0 0 00:05:19.028 asserts 50 50 50 0 n/a 00:05:19.028 00:05:19.028 Elapsed time = 0.000 seconds 00:05:19.028 ************************************ 00:05:19.028 END TEST unittest_include 00:05:19.028 ************************************ 00:05:19.028 00:05:19.028 real 0m0.027s 00:05:19.028 user 0m0.016s 00:05:19.028 sys 0m0.011s 00:05:19.028 23:20:40 unittest.unittest_include -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.028 23:20:40 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:05:19.028 23:20:40 unittest -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:19.028 23:20:40 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.028 23:20:40 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.028 23:20:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:19.028 ************************************ 00:05:19.028 START TEST unittest_bdev 00:05:19.028 ************************************ 00:05:19.028 23:20:40 unittest.unittest_bdev -- common/autotest_common.sh@1121 -- # unittest_bdev 00:05:19.028 23:20:40 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:19.028 00:05:19.028 00:05:19.028 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.028 http://cunit.sourceforge.net/ 00:05:19.028 00:05:19.028 00:05:19.028 Suite: bdev 00:05:19.028 Test: bytes_to_blocks_test ...passed 00:05:19.028 Test: num_blocks_test ...passed 00:05:19.028 Test: io_valid_test ...passed 00:05:19.028 Test: open_write_test ...[2024-05-14 23:20:40.284311] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:19.028 [2024-05-14 23:20:40.284556] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:19.028 [2024-05-14 23:20:40.284626] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:19.028 passed 00:05:19.028 Test: claim_test ...passed 00:05:19.028 Test: alias_add_del_test ...[2024-05-14 23:20:40.384321] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:19.028 [2024-05-14 23:20:40.384444] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4605:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:19.028 [2024-05-14 23:20:40.384496] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:19.028 passed 00:05:19.028 Test: get_device_stat_test ...passed 00:05:19.028 Test: bdev_io_types_test ...passed 00:05:19.028 Test: bdev_io_wait_test ...passed 00:05:19.028 Test: bdev_io_spans_split_test ...passed 00:05:19.028 Test: bdev_io_boundary_split_test ...passed 00:05:19.028 Test: bdev_io_max_size_and_segment_split_test ...[2024-05-14 23:20:40.566427] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:19.028 passed 00:05:19.028 Test: bdev_io_mix_split_test ...passed 00:05:19.028 Test: bdev_io_split_with_io_wait ...passed 00:05:19.028 Test: bdev_io_write_unit_split_test ...[2024-05-14 23:20:40.739575] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:19.028 [2024-05-14 23:20:40.739686] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:19.028 [2024-05-14 23:20:40.739718] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:19.028 [2024-05-14 23:20:40.739779] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:19.028 passed 00:05:19.028 Test: bdev_io_alignment_with_boundary ...passed 00:05:19.028 Test: bdev_io_alignment ...passed 00:05:19.028 Test: bdev_histograms ...passed 00:05:19.028 Test: bdev_write_zeroes ...passed 00:05:19.028 Test: bdev_compare_and_write ...passed 00:05:19.028 Test: bdev_compare ...passed 00:05:19.028 Test: bdev_compare_emulated ...passed 00:05:19.028 Test: bdev_zcopy_write ...passed 00:05:19.028 Test: bdev_zcopy_read ...passed 00:05:19.028 Test: bdev_open_while_hotremove ...passed 00:05:19.028 Test: bdev_close_while_hotremove ...passed 00:05:19.028 Test: bdev_open_ext_test ...passed 00:05:19.028 Test: bdev_open_ext_unregister ...[2024-05-14 23:20:41.290926] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8136:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:19.028 [2024-05-14 23:20:41.291116] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8136:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:19.028 passed 00:05:19.028 Test: bdev_set_io_timeout ...passed 00:05:19.028 Test: bdev_set_qd_sampling ...passed 00:05:19.028 Test: lba_range_overlap ...passed 00:05:19.028 Test: lock_lba_range_check_ranges ...passed 00:05:19.028 Test: lock_lba_range_with_io_outstanding ...passed 00:05:19.028 Test: lock_lba_range_overlapped ...passed 00:05:19.028 Test: bdev_quiesce ...[2024-05-14 
23:20:41.532793] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10059:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:05:19.028 passed 00:05:19.028 Test: bdev_io_abort ...passed 00:05:19.028 Test: bdev_unmap ...passed 00:05:19.028 Test: bdev_write_zeroes_split_test ...passed 00:05:19.028 Test: bdev_set_options_test ...passed 00:05:19.028 Test: bdev_get_memory_domains ...passed 00:05:19.028 Test: bdev_io_ext ...[2024-05-14 23:20:41.692351] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:19.028 passed 00:05:19.028 Test: bdev_io_ext_no_opts ...passed 00:05:19.028 Test: bdev_io_ext_invalid_opts ...passed 00:05:19.028 Test: bdev_io_ext_split ...passed 00:05:19.028 Test: bdev_io_ext_bounce_buffer ...passed 00:05:19.028 Test: bdev_register_uuid_alias ...[2024-05-14 23:20:41.951713] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 77fff304-6153-4ebc-8ab0-3ecc923338aa already exists 00:05:19.028 [2024-05-14 23:20:41.951806] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:77fff304-6153-4ebc-8ab0-3ecc923338aa alias for bdev bdev0 00:05:19.029 passed 00:05:19.029 Test: bdev_unregister_by_name ...passed 00:05:19.029 Test: for_each_bdev_test ...[2024-05-14 23:20:41.978988] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7926:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:19.029 [2024-05-14 23:20:41.979064] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7934:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:05:19.029 passed 00:05:19.029 Test: bdev_seek_test ...passed 00:05:19.029 Test: bdev_copy ...passed 00:05:19.029 Test: bdev_copy_split_test ...passed 00:05:19.029 Test: examine_locks ...passed 00:05:19.029 Test: claim_v2_rwo ...passed 00:05:19.029 Test: claim_v2_rom ...[2024-05-14 23:20:42.113742] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.113815] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8660:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.113837] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.113895] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.113916] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.113977] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8655:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:19.029 [2024-05-14 23:20:42.114136] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.114206] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.114238] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.114271] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.114304] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:19.029 [2024-05-14 23:20:42.114352] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8693:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:19.029 passed 00:05:19.029 Test: claim_v2_rwm ...passed 00:05:19.029 Test: claim_v2_existing_writer ...passed 00:05:19.029 Test: claim_v2_existing_v1 ...passed 00:05:19.029 Test: claim_v1_existing_v2 ...passed 00:05:19.029 Test: examine_claimed ...passed 00:05:19.029 00:05:19.029 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.029 suites 1 1 n/a 0 0 00:05:19.029 tests 59 59 59 0 0 00:05:19.029 asserts 4599 4599 4599 0 n/a 00:05:19.029 00:05:19.029 Elapsed time = 1.890 seconds 00:05:19.029 [2024-05-14 23:20:42.114461] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8728:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:19.029 [2024-05-14 23:20:42.114520] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.114586] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.114621] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.114641] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.114668] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8748:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.114700] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8728:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:19.029 [2024-05-14 23:20:42.114829] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8693:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:19.029 [2024-05-14 23:20:42.114861] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8693:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:19.029 [2024-05-14 23:20:42.114964] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.114997] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.115017] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type 
exclusive_write by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.115107] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.115165] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.115200] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:19.029 [2024-05-14 23:20:42.115419] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:19.029 23:20:42 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:19.029 00:05:19.029 00:05:19.029 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.029 http://cunit.sourceforge.net/ 00:05:19.029 00:05:19.029 00:05:19.029 Suite: nvme 00:05:19.029 Test: test_create_ctrlr ...passed 00:05:19.029 Test: test_reset_ctrlr ...passed 00:05:19.029 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:19.029 Test: test_failover_ctrlr ...passed 00:05:19.029 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:05:19.029 Test: test_pending_reset ...[2024-05-14 23:20:42.159431] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.029 [2024-05-14 23:20:42.160663] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.029 [2024-05-14 23:20:42.160827] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.029 [2024-05-14 23:20:42.160976] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.029 [2024-05-14 23:20:42.162047] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.029 passed 00:05:19.029 Test: test_attach_ctrlr ...passed 00:05:19.029 Test: test_aer_cb ...[2024-05-14 23:20:42.162379] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:19.029 [2024-05-14 23:20:42.163014] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:19.029 passed 00:05:19.029 Test: test_submit_nvme_cmd ...passed 00:05:19.029 Test: test_add_remove_trid ...passed 00:05:19.029 Test: test_abort ...passed 00:05:19.029 Test: test_get_io_qpair ...passed 00:05:19.029 Test: test_bdev_unregister ...passed 00:05:19.029 Test: test_compare_ns ...passed 00:05:19.029 Test: test_init_ana_log_page ...passed 00:05:19.029 Test: test_get_memory_domains ...passed 00:05:19.029 Test: test_reconnect_qpair ...passed 00:05:19.029 Test: test_create_bdev_ctrlr ...passed 00:05:19.029 Test: test_add_multi_ns_to_bdev ...[2024-05-14 23:20:42.164892] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7436:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:19.029 [2024-05-14 23:20:42.166485] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.029 [2024-05-14 23:20:42.166845] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5362:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:19.029 passed 00:05:19.029 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:19.029 Test: test_admin_path ...passed 00:05:19.029 Test: test_reset_bdev_ctrlr ...passed 00:05:19.029 Test: test_find_io_path ...passed 00:05:19.029 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:19.029 Test: test_retry_io_for_io_path_error ...passed 00:05:19.029 Test: test_retry_io_count ...passed 00:05:19.029 Test: test_concurrent_read_ana_log_page ...passed 00:05:19.029 Test: test_retry_io_for_ana_error ...[2024-05-14 23:20:42.167624] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4553:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:19.029 passed 00:05:19.029 Test: test_check_io_error_resiliency_params ...[2024-05-14 23:20:42.171430] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6056:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:05:19.029 [2024-05-14 23:20:42.171512] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6060:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:19.029 passed 00:05:19.029 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:05:19.029 Test: test_reconnect_ctrlr ...[2024-05-14 23:20:42.171568] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6069:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:19.029 [2024-05-14 23:20:42.171659] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6072:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:19.029 [2024-05-14 23:20:42.171707] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:19.029 [2024-05-14 23:20:42.171795] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 
00:05:19.029 [2024-05-14 23:20:42.171855] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6064:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:19.029 [2024-05-14 23:20:42.171944] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6079:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:19.029 [2024-05-14 23:20:42.172011] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:19.029 [2024-05-14 23:20:42.172722] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.029 [2024-05-14 23:20:42.172903] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.029 passed 00:05:19.029 Test: test_retry_failover_ctrlr ...passed 00:05:19.030 Test: test_fail_path ...[2024-05-14 23:20:42.173458] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.030 [2024-05-14 23:20:42.173592] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.030 [2024-05-14 23:20:42.173681] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.030 [2024-05-14 23:20:42.173978] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.030 [2024-05-14 23:20:42.174350] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.030 [2024-05-14 23:20:42.174473] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.030 passed 00:05:19.030 Test: test_nvme_ns_cmp ...passed 00:05:19.030 Test: test_ana_transition ...passed 00:05:19.030 Test: test_set_preferred_path ...passed 00:05:19.030 Test: test_find_next_io_path ...passed 00:05:19.030 Test: test_find_io_path_min_qd ...passed 00:05:19.030 Test: test_disable_auto_failback ...passed 00:05:19.030 Test: test_set_multipath_policy ...passed 00:05:19.030 Test: test_uuid_generation ...passed 00:05:19.030 Test: test_retry_io_to_same_path ...passed 00:05:19.030 Test: test_race_between_reset_and_disconnected ...passed 00:05:19.030 Test: test_ctrlr_op_rpc ...passed 00:05:19.030 Test: test_bdev_ctrlr_op_rpc ...[2024-05-14 23:20:42.174607] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.030 [2024-05-14 23:20:42.174727] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.030 [2024-05-14 23:20:42.174894] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.030 [2024-05-14 23:20:42.176681] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
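For orientation, the test_check_io_error_resiliency_params errors printed just above spell out the parameter constraints the bdev_nvme layer enforces. The following standalone check merely restates those rules as read from the *ERROR* lines; it is a sketch only, with the exact branching inferred from the messages, and is not SPDK's actual bdev_nvme_check_io_error_resiliency_params().

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative restatement of the constraints described by the log messages
 * above. Not SPDK code; the branching order is an assumption. */
static bool
check_io_error_resiliency_params(int32_t ctrlr_loss_timeout_sec,
                                 uint32_t reconnect_delay_sec,
                                 uint32_t fast_io_fail_timeout_sec)
{
    if (ctrlr_loss_timeout_sec < -1) {
        return false; /* "ctrlr_loss_timeout_sec can't be less than -1" */
    }
    if (ctrlr_loss_timeout_sec == 0) {
        /* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
         * if ctrlr_loss_timeout_sec is 0" */
        return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
    }
    if (reconnect_delay_sec == 0) {
        return false; /* "reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0" */
    }
    if (ctrlr_loss_timeout_sec > 0 &&
        reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
        return false; /* "reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec" */
    }
    if (ctrlr_loss_timeout_sec > 0 && fast_io_fail_timeout_sec != 0 &&
        fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
        return false; /* "fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec" */
    }
    if (fast_io_fail_timeout_sec != 0 &&
        reconnect_delay_sec > fast_io_fail_timeout_sec) {
        return false; /* "reconnect_delay_sec can't be more than fast_io_fail_timeout_sec" */
    }
    return true;
}

int main(void)
{
    printf("%d\n", check_io_error_resiliency_params(5, 1, 3)); /* valid -> 1 */
    printf("%d\n", check_io_error_resiliency_params(0, 1, 0)); /* rejected -> 0 */
    return 0;
}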
00:05:19.030 passed 00:05:19.030 Test: test_disable_enable_ctrlr ...passed 00:05:19.030 Test: test_delete_ctrlr_done ...passed 00:05:19.030 Test: test_ns_remove_during_reset ...[2024-05-14 23:20:42.180396] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.030 [2024-05-14 23:20:42.180566] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.030 passed 00:05:19.030 00:05:19.030 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.030 suites 1 1 n/a 0 0 00:05:19.030 tests 48 48 48 0 0 00:05:19.030 asserts 3565 3565 3565 0 n/a 00:05:19.030 00:05:19.030 Elapsed time = 0.030 seconds 00:05:19.030 23:20:42 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:19.030 00:05:19.030 00:05:19.030 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.030 http://cunit.sourceforge.net/ 00:05:19.030 00:05:19.030 Test Options 00:05:19.030 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:05:19.030 00:05:19.030 Suite: raid 00:05:19.030 Test: test_create_raid ...passed 00:05:19.030 Test: test_create_raid_superblock ...passed 00:05:19.030 Test: test_delete_raid ...passed 00:05:19.030 Test: test_create_raid_invalid_args ...[2024-05-14 23:20:42.214545] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:19.030 [2024-05-14 23:20:42.214941] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:19.030 [2024-05-14 23:20:42.215334] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:19.030 [2024-05-14 23:20:42.215534] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:19.030 [2024-05-14 23:20:42.215654] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:19.030 [2024-05-14 23:20:42.216527] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:19.030 [2024-05-14 23:20:42.216591] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:19.030 passed 00:05:19.030 Test: test_delete_raid_invalid_args ...passed 00:05:19.030 Test: test_io_channel ...passed 00:05:19.030 Test: test_reset_io ...passed 00:05:19.030 Test: test_write_io ...passed 00:05:19.030 Test: test_read_io ...passed 00:05:20.403 Test: test_unmap_io ...passed 00:05:20.403 Test: test_io_failure ...passed 00:05:20.403 Test: test_multi_raid_no_io ...passed 00:05:20.403 Test: test_multi_raid_with_io ...[2024-05-14 23:20:43.296609] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 961:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:20.403 passed 00:05:20.403 Test: test_io_type_supported ...passed 00:05:20.403 Test: test_raid_json_dump_info ...passed 00:05:20.403 Test: test_context_size ...passed 00:05:20.403 Test: test_raid_level_conversions ...passed 00:05:20.403 Test: 
test_raid_io_split ...passed 00:05:20.403 Test: test_raid_process ...passedTest Options 00:05:20.403 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 1 00:05:20.403 00:05:20.403 Suite: raid_dif 00:05:20.403 Test: test_create_raid ...passed 00:05:20.403 Test: test_create_raid_superblock ...passed 00:05:20.403 Test: test_delete_raid ...passed 00:05:20.403 Test: test_create_raid_invalid_args ...[2024-05-14 23:20:43.315040] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:20.403 [2024-05-14 23:20:43.315236] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:20.403 [2024-05-14 23:20:43.315524] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:20.403 [2024-05-14 23:20:43.315609] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:20.403 [2024-05-14 23:20:43.315630] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:20.403 [2024-05-14 23:20:43.316405] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3117:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:20.403 [2024-05-14 23:20:43.316435] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3295:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:20.403 passed 00:05:20.403 Test: test_delete_raid_invalid_args ...passed 00:05:20.403 Test: test_io_channel ...passed 00:05:20.403 Test: test_reset_io ...passed 00:05:20.403 Test: test_write_io ...passed 00:05:20.403 Test: test_read_io ...passed 00:05:21.339 Test: test_unmap_io ...passed 00:05:21.339 Test: test_io_failure ...[2024-05-14 23:20:44.365364] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 961:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:21.339 passed 00:05:21.339 Test: test_multi_raid_no_io ...passed 00:05:21.339 Test: test_multi_raid_with_io ...passed 00:05:21.339 Test: test_io_type_supported ...passed 00:05:21.339 Test: test_raid_json_dump_info ...passed 00:05:21.339 Test: test_context_size ...passed 00:05:21.339 Test: test_raid_level_conversions ...passed 00:05:21.339 Test: test_raid_io_split ...passed 00:05:21.339 Test: test_raid_process ...passed 00:05:21.339 00:05:21.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.339 suites 2 2 n/a 0 0 00:05:21.339 tests 38 38 38 0 0 00:05:21.339 asserts 355741 355741 355741 0 n/a 00:05:21.339 00:05:21.339 Elapsed time = 2.180 seconds 00:05:21.339 23:20:44 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:21.339 00:05:21.339 00:05:21.339 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.339 http://cunit.sourceforge.net/ 00:05:21.339 00:05:21.339 00:05:21.339 Suite: raid_sb 00:05:21.339 Test: test_raid_bdev_write_superblock ...passed 00:05:21.339 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:21.339 Test: test_raid_bdev_parse_superblock ...passed 00:05:21.339 Suite: raid_sb_md 00:05:21.339 Test: test_raid_bdev_write_superblock ...passed 00:05:21.339 Test: test_raid_bdev_load_base_bdev_superblock 
...passed 00:05:21.339 Test: test_raid_bdev_parse_superblock ...passed 00:05:21.339 Suite: raid_sb_md_interleaved 00:05:21.339 Test: test_raid_bdev_write_superblock ...passed 00:05:21.339 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:21.339 Test: test_raid_bdev_parse_superblock ...passed 00:05:21.339 00:05:21.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.339 suites 3 3 n/a 0 0 00:05:21.339 tests 9 9 9 0 0 00:05:21.339 asserts 139 139 139 0 n/a 00:05:21.339 00:05:21.339 Elapsed time = 0.000 seconds 00:05:21.339 [2024-05-14 23:20:44.434476] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:21.339 [2024-05-14 23:20:44.434846] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:21.339 [2024-05-14 23:20:44.435076] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:21.339 23:20:44 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:21.339 00:05:21.339 00:05:21.339 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.339 http://cunit.sourceforge.net/ 00:05:21.339 00:05:21.339 00:05:21.339 Suite: concat 00:05:21.339 Test: test_concat_start ...passed 00:05:21.339 Test: test_concat_rw ...passed 00:05:21.339 Test: test_concat_null_payload ...passed 00:05:21.339 00:05:21.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.339 suites 1 1 n/a 0 0 00:05:21.339 tests 3 3 3 0 0 00:05:21.339 asserts 8460 8460 8460 0 n/a 00:05:21.339 00:05:21.339 Elapsed time = 0.010 seconds 00:05:21.339 23:20:44 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:21.339 00:05:21.339 00:05:21.339 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.339 http://cunit.sourceforge.net/ 00:05:21.339 00:05:21.339 00:05:21.339 Suite: raid1 00:05:21.339 Test: test_raid1_start ...passed 00:05:21.339 Test: test_raid1_read_balancing ...passed 00:05:21.339 00:05:21.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.339 suites 1 1 n/a 0 0 00:05:21.339 tests 2 2 2 0 0 00:05:21.339 asserts 2880 2880 2880 0 n/a 00:05:21.339 00:05:21.339 Elapsed time = 0.000 seconds 00:05:21.339 23:20:44 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:21.339 00:05:21.339 00:05:21.339 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.339 http://cunit.sourceforge.net/ 00:05:21.339 00:05:21.339 00:05:21.339 Suite: zone 00:05:21.339 Test: test_zone_get_operation ...passed 00:05:21.339 Test: test_bdev_zone_get_info ...passed 00:05:21.339 Test: test_bdev_zone_management ...passed 00:05:21.339 Test: test_bdev_zone_append ...passed 00:05:21.339 Test: test_bdev_zone_append_with_md ...passed 00:05:21.339 Test: test_bdev_zone_appendv ...passed 00:05:21.339 Test: test_bdev_zone_appendv_with_md ...passed 00:05:21.339 Test: test_bdev_io_get_append_location ...passed 00:05:21.339 00:05:21.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.339 suites 1 1 n/a 0 0 00:05:21.339 tests 8 8 8 0 0 00:05:21.339 asserts 94 94 94 0 n/a 00:05:21.339 00:05:21.339 Elapsed time = 0.000 seconds 
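Every per-suite block in this run prints the same CUnit banner and run summary. For readers unfamiliar with the framework, a minimal standalone CUnit harness looks roughly like the following; this is generic CUnit usage for illustration, not SPDK's own unit test code.

#include <CUnit/Basic.h>

/* Registers one suite with one test and runs it in verbose mode, producing
 * the same style of "Run Summary" seen throughout this log. */
static void test_addition(void)
{
    CU_ASSERT_EQUAL(2 + 2, 4);
}

int main(void)
{
    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }

    CU_pSuite suite = CU_add_suite("example", NULL, NULL);
    if (suite == NULL ||
        CU_add_test(suite, "test_addition", test_addition) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    CU_cleanup_registry();
    return CU_get_error();
}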
00:05:21.339 23:20:44 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:21.339 00:05:21.339 00:05:21.339 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.339 http://cunit.sourceforge.net/ 00:05:21.339 00:05:21.339 00:05:21.339 Suite: gpt_parse 00:05:21.339 Test: test_parse_mbr_and_primary ...[2024-05-14 23:20:44.516125] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:21.339 [2024-05-14 23:20:44.516663] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:21.339 [2024-05-14 23:20:44.516761] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:21.339 [2024-05-14 23:20:44.516904] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:21.339 [2024-05-14 23:20:44.516969] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:21.339 [2024-05-14 23:20:44.517082] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:21.339 passed 00:05:21.339 Test: test_parse_secondary ...[2024-05-14 23:20:44.518049] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:21.339 [2024-05-14 23:20:44.518128] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:21.339 [2024-05-14 23:20:44.518201] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:21.339 [2024-05-14 23:20:44.518258] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:21.339 passed 00:05:21.339 Test: test_check_mbr ...passed 00:05:21.339 Test: test_read_header ...[2024-05-14 23:20:44.519223] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:21.339 [2024-05-14 23:20:44.519287] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:21.339 [2024-05-14 23:20:44.519378] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:21.339 [2024-05-14 23:20:44.519528] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:21.339 passed 00:05:21.339 Test: test_read_partitions ...[2024-05-14 23:20:44.519679] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:21.339 [2024-05-14 23:20:44.519748] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:21.339 [2024-05-14 23:20:44.519795] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:21.339 [2024-05-14 23:20:44.519846] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:21.339 [2024-05-14 23:20:44.519912] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:21.339 [2024-05-14 23:20:44.520002] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:21.339 [2024-05-14 23:20:44.520058] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:21.339 [2024-05-14 23:20:44.520104] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:21.339 [2024-05-14 23:20:44.520441] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:05:21.339 passed 00:05:21.339 00:05:21.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.339 suites 1 1 n/a 0 0 00:05:21.339 tests 5 5 5 0 0 00:05:21.339 asserts 33 33 33 0 n/a 00:05:21.339 00:05:21.339 Elapsed time = 0.010 seconds 00:05:21.339 23:20:44 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:21.339 00:05:21.339 00:05:21.339 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.339 http://cunit.sourceforge.net/ 00:05:21.339 00:05:21.339 00:05:21.339 Suite: bdev_part 00:05:21.339 Test: part_test ...passed 00:05:21.339 Test: part_free_test ...passed 00:05:21.339 Test: part_get_io_channel_test ...[2024-05-14 23:20:44.548104] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:21.339 passed 00:05:21.339 Test: part_construct_ext ...passed 00:05:21.339 00:05:21.340 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.340 suites 1 1 n/a 0 0 00:05:21.340 tests 4 4 4 0 0 00:05:21.340 asserts 48 48 48 0 n/a 00:05:21.340 00:05:21.340 Elapsed time = 0.060 seconds 00:05:21.340 23:20:44 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:21.599 00:05:21.599 00:05:21.599 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.599 http://cunit.sourceforge.net/ 00:05:21.599 00:05:21.599 00:05:21.599 Suite: scsi_nvme_suite 00:05:21.599 Test: scsi_nvme_translate_test ...passed 00:05:21.599 00:05:21.599 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.599 suites 1 1 n/a 0 0 00:05:21.599 tests 1 1 1 0 0 00:05:21.599 asserts 104 104 104 0 n/a 00:05:21.599 00:05:21.599 Elapsed time = 0.000 seconds 00:05:21.599 23:20:44 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:21.599 00:05:21.599 00:05:21.599 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.599 http://cunit.sourceforge.net/ 00:05:21.599 00:05:21.599 00:05:21.599 Suite: lvol 00:05:21.599 Test: ut_lvs_init ...[2024-05-14 23:20:44.644624] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:21.599 passed 00:05:21.599 Test: ut_lvol_init ...passed 00:05:21.599 Test: ut_lvol_snapshot ...passed 00:05:21.599 Test: ut_lvol_clone ...[2024-05-14 23:20:44.644937] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:21.599 passed 00:05:21.599 Test: ut_lvs_destroy ...passed 00:05:21.599 Test: ut_lvs_unload ...passed 00:05:21.599 Test: ut_lvol_resize ...passed 00:05:21.599 Test: 
ut_lvol_set_read_only ...passed 00:05:21.599 Test: ut_lvol_hotremove ...passed 00:05:21.599 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:21.599 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:21.599 Test: ut_lvol_read_write ...[2024-05-14 23:20:44.645522] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:21.599 passed 00:05:21.599 Test: ut_vbdev_lvol_submit_request ...passed 00:05:21.599 Test: ut_lvol_examine_config ...passed 00:05:21.599 Test: ut_lvol_examine_disk ...[2024-05-14 23:20:44.645910] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:21.599 passed 00:05:21.599 Test: ut_lvol_rename ...passed 00:05:21.599 Test: ut_bdev_finish ...passed 00:05:21.599 Test: ut_lvs_rename ...passed 00:05:21.599 Test: ut_lvol_seek ...passed 00:05:21.599 Test: ut_esnap_dev_create ...passed 00:05:21.599 Test: ut_lvol_esnap_clone_bad_args ...passed[2024-05-14 23:20:44.646438] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:21.599 [2024-05-14 23:20:44.646530] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:21.599 [2024-05-14 23:20:44.646842] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:21.599 [2024-05-14 23:20:44.646899] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:21.599 [2024-05-14 23:20:44.646932] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:21.599 [2024-05-14 23:20:44.646976] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1911:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:21.599 [2024-05-14 23:20:44.647080] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:21.599 [2024-05-14 23:20:44.647113] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:21.599 00:05:21.599 Test: ut_lvol_shallow_copy ...[2024-05-14 23:20:44.647267] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:05:21.599 passed 00:05:21.599 00:05:21.599 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.599 suites 1 1 n/a 0 0 00:05:21.599 tests 22 22 22 0 0 00:05:21.599 asserts 793 793 793 0 n/a 00:05:21.599 00:05:21.599 Elapsed time = 0.010 seconds 00:05:21.599 [2024-05-14 23:20:44.647310] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:05:21.599 23:20:44 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:21.599 00:05:21.599 00:05:21.599 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.599 http://cunit.sourceforge.net/ 00:05:21.599 00:05:21.599 00:05:21.599 Suite: zone_block 00:05:21.599 Test: test_zone_block_create ...passed 
00:05:21.599 Test: test_zone_block_create_invalid ...[2024-05-14 23:20:44.684764] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:21.599 [2024-05-14 23:20:44.685011] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-14 23:20:44.685119] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:21.599 [2024-05-14 23:20:44.685191] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File existspassed 00:05:21.599 Test: test_get_zone_info ...[2024-05-14 23:20:44.685253] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:21.599 [2024-05-14 23:20:44.685300] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-14 23:20:44.685346] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:21.599 [2024-05-14 23:20:44.685385] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:05:21.599 Test: test_supported_io_types ...passed 00:05:21.599 Test: test_reset_zone ...passed 00:05:21.599 Test: test_open_zone ...passed 00:05:21.599 Test: test_zone_write ...[2024-05-14 23:20:44.685691] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.685741] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.685780] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.686185] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.686225] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.686437] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.686887] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.686929] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:21.599 [2024-05-14 23:20:44.687229] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:21.599 [2024-05-14 23:20:44.687264] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.687312] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:21.599 [2024-05-14 23:20:44.687350] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.692158] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:21.599 [2024-05-14 23:20:44.692204] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.692259] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:21.599 [2024-05-14 23:20:44.692298] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 passed 00:05:21.599 Test: test_zone_read ...[2024-05-14 23:20:44.697352] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:21.599 [2024-05-14 23:20:44.697405] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.697668] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:21.599 [2024-05-14 23:20:44.697699] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.697755] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:21.599 [2024-05-14 23:20:44.697781] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 passed 00:05:21.599 Test: test_close_zone ...[2024-05-14 23:20:44.698046] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:21.599 [2024-05-14 23:20:44.698074] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.698301] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:21.599 [2024-05-14 23:20:44.698350] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:21.600 [2024-05-14 23:20:44.698439] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:21.600 [2024-05-14 23:20:44.698471] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:21.600 passed
00:05:21.600 Test: test_finish_zone ...[2024-05-14 23:20:44.698813] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:21.600 [2024-05-14 23:20:44.698845] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:21.600 passed
00:05:21.600 Test: test_append_zone ...[2024-05-14 23:20:44.699035] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:05:21.600 [2024-05-14 23:20:44.699061] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:21.600 [2024-05-14 23:20:44.699105] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:05:21.600 [2024-05-14 23:20:44.699128] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:05:21.600 passed
00:05:21.600
00:05:21.600 Run Summary: Type Total Ran Passed Failed Inactive
00:05:21.600 suites 1 1 n/a 0 0
00:05:21.600 tests 11 11 11 0 0
00:05:21.600 asserts 3437 3437 3437 0 n/a
00:05:21.600
00:05:21.600 Elapsed time = 0.020 seconds
00:05:21.600 [2024-05-14 23:20:44.709321] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:05:21.600 [2024-05-14 23:20:44.709363] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
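The zone_block suite above exercises three rejections that are visible in its error output: a write to a zone that is not in a writable state ("invalid state 2"), a write whose starting LBA does not match the zone's write pointer ("lba 0x407, wp 0x405"), and a write that would run past the zone capacity ("lba 0x3f0, len 0x20, wp 0x3f0"). The following is only a minimal, self-contained sketch of those rules, with every name hypothetical; it is not SPDK's actual vbdev_zone_block code.

/* Illustrative sketch (hypothetical names), mirroring the checks implied by
 * the zone_block error messages above: the zone must be writable, the write
 * must start at the write pointer, and it must fit within the capacity. */
#include <errno.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

enum zone_state { ZONE_STATE_EMPTY, ZONE_STATE_OPEN, ZONE_STATE_FULL };

struct zone {
	uint64_t start_lba;     /* first LBA of the zone */
	uint64_t capacity;      /* writable blocks in the zone */
	uint64_t write_ptr;     /* next LBA that may be written */
	enum zone_state state;
};

/* Return 0 if the write may proceed, or a negative errno-style value. */
static int zone_write_check(const struct zone *z, uint64_t lba, uint64_t len)
{
	if (z->state != ZONE_STATE_EMPTY && z->state != ZONE_STATE_OPEN) {
		fprintf(stderr, "Trying to write to zone in invalid state %d\n",
			(int)z->state);
		return -EINVAL;
	}
	if (lba != z->write_ptr) {
		fprintf(stderr, "Invalid write address (lba 0x%" PRIx64
			", wp 0x%" PRIx64 ")\n", lba, z->write_ptr);
		return -EINVAL;
	}
	if (lba - z->start_lba + len > z->capacity) {
		fprintf(stderr, "Write exceeds zone capacity (lba 0x%" PRIx64
			", len 0x%" PRIx64 ")\n", lba, len);
		return -EINVAL;
	}
	return 0;
}

int main(void)
{
	struct zone z = { .start_lba = 0x400, .capacity = 0x3f8,
			  .write_ptr = 0x405, .state = ZONE_STATE_OPEN };

	/* Mirrors the "lba 0x407, wp 0x405" rejection reported above. */
	return zone_write_check(&z, 0x407, 0x10) == 0 ? 0 : 1;
}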
00:05:21.600 23:20:44 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:21.600 00:05:21.600 00:05:21.600 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.600 http://cunit.sourceforge.net/ 00:05:21.600 00:05:21.600 00:05:21.600 Suite: bdev 00:05:21.600 Test: basic ...[2024-05-14 23:20:44.793252] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x51e201): Operation not permitted (rc=-1) 00:05:21.600 [2024-05-14 23:20:44.793468] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x51e1c0): Operation not permitted (rc=-1) 00:05:21.600 [2024-05-14 23:20:44.793513] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x51e201): Operation not permitted (rc=-1) 00:05:21.600 passed 00:05:21.600 Test: unregister_and_close ...passed 00:05:21.858 Test: unregister_and_close_different_threads ...passed 00:05:21.858 Test: basic_qos ...passed 00:05:21.858 Test: put_channel_during_reset ...passed 00:05:21.858 Test: aborted_reset ...passed 00:05:21.858 Test: aborted_reset_no_outstanding_io ...passed 00:05:21.858 Test: io_during_reset ...passed 00:05:22.117 Test: reset_completions ...passed 00:05:22.117 Test: io_during_qos_queue ...passed 00:05:22.117 Test: io_during_qos_reset ...passed 00:05:22.117 Test: enomem ...passed 00:05:22.117 Test: enomem_multi_bdev ...passed 00:05:22.376 Test: enomem_multi_bdev_unregister ...passed 00:05:22.376 Test: enomem_multi_io_target ...passed 00:05:22.376 Test: qos_dynamic_enable ...passed 00:05:22.376 Test: bdev_histograms_mt ...passed 00:05:22.635 Test: bdev_set_io_timeout_mt ...[2024-05-14 23:20:45.691070] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:22.635 passed 00:05:22.635 Test: lock_lba_range_then_submit_io ...[2024-05-14 23:20:45.713096] thread.c:2173:spdk_io_device_register: *ERROR*: io_device 0x51e180 already registered (old:0x6130000003c0 new:0x613000000c80) 00:05:22.635 passed 00:05:22.635 Test: unregister_during_reset ...passed 00:05:22.635 Test: event_notify_and_close ...passed 00:05:22.635 Suite: bdev_wrong_thread 00:05:22.635 Test: spdk_bdev_register_wt ...passed 00:05:22.635 Test: spdk_bdev_examine_wt ...[2024-05-14 23:20:45.838422] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8454:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x618000000880 (0x618000000880) 00:05:22.635 [2024-05-14 23:20:45.838996] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000000880 (0x618000000880) 00:05:22.635 passed 00:05:22.635 00:05:22.635 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.635 suites 2 2 n/a 0 0 00:05:22.635 tests 23 23 23 0 0 00:05:22.635 asserts 601 601 601 0 n/a 00:05:22.635 00:05:22.635 Elapsed time = 1.060 seconds 00:05:22.635 ************************************ 00:05:22.635 END TEST unittest_bdev 00:05:22.635 ************************************ 00:05:22.635 00:05:22.635 real 0m5.673s 00:05:22.635 user 0m2.252s 00:05:22.635 sys 0m3.412s 00:05:22.635 23:20:45 unittest.unittest_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.635 23:20:45 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:22.635 23:20:45 unittest -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 
00:05:22.635 23:20:45 unittest -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:22.635 23:20:45 unittest -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:22.635 23:20:45 unittest -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:22.635 23:20:45 unittest -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:05:22.635 23:20:45 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.635 23:20:45 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.635 23:20:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:22.894 ************************************ 00:05:22.894 START TEST unittest_blob_blobfs 00:05:22.894 ************************************ 00:05:22.894 23:20:45 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1121 -- # unittest_blob 00:05:22.894 23:20:45 unittest.unittest_blob_blobfs -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:05:22.894 23:20:45 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:05:22.894 00:05:22.894 00:05:22.894 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.894 http://cunit.sourceforge.net/ 00:05:22.894 00:05:22.894 00:05:22.894 Suite: blob_nocopy_noextent 00:05:22.894 Test: blob_init ...[2024-05-14 23:20:45.947078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5463:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:22.894 passed 00:05:22.894 Test: blob_thin_provision ...passed 00:05:22.894 Test: blob_read_only ...passed 00:05:22.894 Test: bs_load ...[2024-05-14 23:20:46.011755] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 938:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:22.894 passed 00:05:22.894 Test: bs_load_custom_cluster_size ...passed 00:05:22.894 Test: bs_load_after_failed_grow ...passed 00:05:22.894 Test: bs_cluster_sz ...[2024-05-14 23:20:46.045227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:22.895 [2024-05-14 23:20:46.045714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5594:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:05:22.895 [2024-05-14 23:20:46.045929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3856:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:22.895 passed 00:05:22.895 Test: bs_resize_md ...passed 00:05:22.895 Test: bs_destroy ...passed 00:05:22.895 Test: bs_type ...passed 00:05:22.895 Test: bs_super_block ...passed 00:05:22.895 Test: bs_test_recover_cluster_count ...passed 00:05:22.895 Test: bs_grow_live ...passed 00:05:22.895 Test: bs_grow_live_no_space ...passed 00:05:22.895 Test: bs_test_grow ...passed 00:05:22.895 Test: blob_serialize_test ...passed 00:05:22.895 Test: super_block_crc ...passed 00:05:23.154 Test: blob_thin_prov_write_count_io ...passed 00:05:23.154 Test: blob_thin_prov_unmap_cluster ...passed 00:05:23.154 Test: bs_load_iter_test ...passed 00:05:23.154 Test: blob_relations ...[2024-05-14 23:20:46.233394] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.154 [2024-05-14 23:20:46.233559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.154 passed 00:05:23.154 Test: blob_relations2 ...[2024-05-14 23:20:46.234934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.154 [2024-05-14 23:20:46.235030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.154 [2024-05-14 23:20:46.250310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.154 [2024-05-14 23:20:46.250452] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.154 [2024-05-14 23:20:46.250521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.154 [2024-05-14 23:20:46.250585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.154 [2024-05-14 23:20:46.252285] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.154 [2024-05-14 23:20:46.252345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.154 [2024-05-14 23:20:46.252725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:23.154 [2024-05-14 23:20:46.252780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.154 passed 00:05:23.154 Test: blob_relations3 ...passed 00:05:23.154 Test: blobstore_clean_power_failure ...passed 00:05:23.154 Test: blob_delete_snapshot_power_failure ...[2024-05-14 23:20:46.411657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:23.154 [2024-05-14 23:20:46.423692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:23.154 [2024-05-14 23:20:46.423827] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:23.154 [2024-05-14 23:20:46.423885] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.154 [2024-05-14 23:20:46.440912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:23.154 [2024-05-14 23:20:46.441034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:23.154 [2024-05-14 23:20:46.441073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:23.413 [2024-05-14 23:20:46.441139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.413 [2024-05-14 23:20:46.454259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:23.413 [2024-05-14 23:20:46.454402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.413 [2024-05-14 23:20:46.467315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:23.413 [2024-05-14 23:20:46.467451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.413 [2024-05-14 23:20:46.480455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:23.413 [2024-05-14 23:20:46.480575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:23.413 passed 00:05:23.413 Test: blob_create_snapshot_power_failure ...[2024-05-14 23:20:46.522740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:23.413 [2024-05-14 23:20:46.547958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:23.413 [2024-05-14 23:20:46.563434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:23.413 passed 00:05:23.413 Test: blob_io_unit ...passed 00:05:23.413 Test: blob_io_unit_compatibility ...passed 00:05:23.413 Test: blob_ext_md_pages ...passed 00:05:23.413 Test: blob_esnap_io_4096_4096 ...passed 00:05:23.671 Test: blob_esnap_io_512_512 ...passed 00:05:23.671 Test: blob_esnap_io_4096_512 ...passed 00:05:23.671 Test: blob_esnap_io_512_4096 ...passed 00:05:23.671 Test: blob_esnap_clone_resize ...passed 00:05:23.671 Suite: blob_bs_nocopy_noextent 00:05:23.671 Test: blob_open ...passed 00:05:23.672 Test: blob_create ...[2024-05-14 23:20:46.838500] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:23.672 passed 00:05:23.672 Test: blob_create_loop ...passed 00:05:23.672 Test: blob_create_fail ...[2024-05-14 23:20:46.941850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:23.672 passed 00:05:23.930 Test: blob_create_internal ...passed 00:05:23.930 Test: blob_create_zero_extent ...passed 00:05:23.930 Test: blob_snapshot ...passed 00:05:23.930 Test: blob_clone ...passed 00:05:23.930 Test: blob_inflate 
...[2024-05-14 23:20:47.127775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:23.930 passed 00:05:23.930 Test: blob_delete ...passed 00:05:23.930 Test: blob_resize_test ...[2024-05-14 23:20:47.186014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:23.930 passed 00:05:24.188 Test: blob_resize_thin_test ...passed 00:05:24.188 Test: channel_ops ...passed 00:05:24.188 Test: blob_super ...passed 00:05:24.188 Test: blob_rw_verify_iov ...passed 00:05:24.188 Test: blob_unmap ...passed 00:05:24.188 Test: blob_iter ...passed 00:05:24.188 Test: blob_parse_md ...passed 00:05:24.188 Test: bs_load_pending_removal ...passed 00:05:24.188 Test: bs_unload ...[2024-05-14 23:20:47.462282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:24.188 passed 00:05:24.447 Test: bs_usable_clusters ...passed 00:05:24.447 Test: blob_crc ...[2024-05-14 23:20:47.526935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:24.447 [2024-05-14 23:20:47.527088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:24.447 passed 00:05:24.447 Test: blob_flags ...passed 00:05:24.447 Test: bs_version ...passed 00:05:24.447 Test: blob_set_xattrs_test ...[2024-05-14 23:20:47.621586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:24.447 [2024-05-14 23:20:47.621746] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:24.447 passed 00:05:24.447 Test: blob_thin_prov_alloc ...passed 00:05:24.706 Test: blob_insert_cluster_msg_test ...passed 00:05:24.706 Test: blob_thin_prov_rw ...passed 00:05:24.706 Test: blob_thin_prov_rle ...passed 00:05:24.706 Test: blob_thin_prov_rw_iov ...passed 00:05:24.706 Test: blob_snapshot_rw ...passed 00:05:24.706 Test: blob_snapshot_rw_iov ...passed 00:05:24.963 Test: blob_inflate_rw ...passed 00:05:24.963 Test: blob_snapshot_freeze_io ...passed 00:05:25.221 Test: blob_operation_split_rw ...passed 00:05:25.221 Test: blob_operation_split_rw_iov ...passed 00:05:25.221 Test: blob_simultaneous_operations ...[2024-05-14 23:20:48.442189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:25.221 [2024-05-14 23:20:48.442335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.221 [2024-05-14 23:20:48.444678] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:25.221 [2024-05-14 23:20:48.444797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.221 [2024-05-14 23:20:48.461670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:25.221 [2024-05-14 23:20:48.461752] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.221 [2024-05-14 23:20:48.461882] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:25.221 [2024-05-14 23:20:48.461923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:25.221 passed 00:05:25.479 Test: blob_persist_test ...passed 00:05:25.479 Test: blob_decouple_snapshot ...passed 00:05:25.479 Test: blob_seek_io_unit ...passed 00:05:25.479 Test: blob_nested_freezes ...passed 00:05:25.479 Test: blob_clone_resize ...passed 00:05:25.479 Test: blob_shallow_copy ...[2024-05-14 23:20:48.753906] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:25.479 [2024-05-14 23:20:48.754636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7315:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:25.479 [2024-05-14 23:20:48.754955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7323:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:25.737 passed 00:05:25.737 Suite: blob_blob_nocopy_noextent 00:05:25.737 Test: blob_write ...passed 00:05:25.737 Test: blob_read ...passed 00:05:25.737 Test: blob_rw_verify ...passed 00:05:25.737 Test: blob_rw_verify_iov_nomem ...passed 00:05:25.737 Test: blob_rw_iov_read_only ...passed 00:05:25.737 Test: blob_xattr ...passed 00:05:25.737 Test: blob_dirty_shutdown ...passed 00:05:25.995 Test: blob_is_degraded ...passed 00:05:25.995 Suite: blob_esnap_bs_nocopy_noextent 00:05:25.995 Test: blob_esnap_create ...passed 00:05:25.995 Test: blob_esnap_thread_add_remove ...passed 00:05:25.995 Test: blob_esnap_clone_snapshot ...passed 00:05:25.995 Test: blob_esnap_clone_inflate ...passed 00:05:25.995 Test: blob_esnap_clone_decouple ...passed 00:05:25.995 Test: blob_esnap_clone_reload ...passed 00:05:25.995 Test: blob_esnap_hotplug ...passed 00:05:25.995 Suite: blob_nocopy_extent 00:05:25.995 Test: blob_init ...[2024-05-14 23:20:49.271801] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5463:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:25.995 passed 00:05:26.254 Test: blob_thin_provision ...passed 00:05:26.254 Test: blob_read_only ...passed 00:05:26.254 Test: bs_load ...[2024-05-14 23:20:49.321845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 938:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:26.254 passed 00:05:26.254 Test: bs_load_custom_cluster_size ...passed 00:05:26.254 Test: bs_load_after_failed_grow ...passed 00:05:26.254 Test: bs_cluster_sz ...[2024-05-14 23:20:49.346080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:26.254 [2024-05-14 23:20:49.346657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5594:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:05:26.254 [2024-05-14 23:20:49.346728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3856:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:26.254 passed 00:05:26.254 Test: bs_resize_md ...passed 00:05:26.254 Test: bs_destroy ...passed 00:05:26.254 Test: bs_type ...passed 00:05:26.254 Test: bs_super_block ...passed 00:05:26.254 Test: bs_test_recover_cluster_count ...passed 00:05:26.254 Test: bs_grow_live ...passed 00:05:26.254 Test: bs_grow_live_no_space ...passed 00:05:26.254 Test: bs_test_grow ...passed 00:05:26.254 Test: blob_serialize_test ...passed 00:05:26.254 Test: super_block_crc ...passed 00:05:26.254 Test: blob_thin_prov_write_count_io ...passed 00:05:26.254 Test: blob_thin_prov_unmap_cluster ...passed 00:05:26.254 Test: bs_load_iter_test ...passed 00:05:26.254 Test: blob_relations ...[2024-05-14 23:20:49.512215] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.254 [2024-05-14 23:20:49.512336] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.254 [2024-05-14 23:20:49.514264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.254 [2024-05-14 23:20:49.514404] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.254 passed 00:05:26.254 Test: blob_relations2 ...[2024-05-14 23:20:49.530692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.254 [2024-05-14 23:20:49.530802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.254 [2024-05-14 23:20:49.530881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.254 [2024-05-14 23:20:49.530913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.254 [2024-05-14 23:20:49.532765] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.254 [2024-05-14 23:20:49.532834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.254 [2024-05-14 23:20:49.533348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:26.254 [2024-05-14 23:20:49.533407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.254 passed 00:05:26.513 Test: blob_relations3 ...passed 00:05:26.513 Test: blobstore_clean_power_failure ...passed 00:05:26.513 Test: blob_delete_snapshot_power_failure ...[2024-05-14 23:20:49.679371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:26.513 [2024-05-14 23:20:49.692058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:26.513 [2024-05-14 23:20:49.703612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:26.513 [2024-05-14 23:20:49.703730] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:26.513 [2024-05-14 23:20:49.703826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.513 [2024-05-14 23:20:49.715414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:26.513 [2024-05-14 23:20:49.715529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:26.513 [2024-05-14 23:20:49.715580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:26.513 [2024-05-14 23:20:49.715620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.513 [2024-05-14 23:20:49.728111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:26.513 [2024-05-14 23:20:49.728229] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:26.513 [2024-05-14 23:20:49.728293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:26.513 [2024-05-14 23:20:49.728341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.513 [2024-05-14 23:20:49.744015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:26.513 [2024-05-14 23:20:49.744141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.513 [2024-05-14 23:20:49.755532] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:26.513 [2024-05-14 23:20:49.755661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.513 [2024-05-14 23:20:49.767572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:26.513 [2024-05-14 23:20:49.767672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:26.513 passed 00:05:26.771 Test: blob_create_snapshot_power_failure ...[2024-05-14 23:20:49.808848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:26.771 [2024-05-14 23:20:49.820408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:26.771 [2024-05-14 23:20:49.843105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:26.771 [2024-05-14 23:20:49.854930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:26.771 passed 00:05:26.771 Test: blob_io_unit ...passed 00:05:26.771 Test: blob_io_unit_compatibility ...passed 00:05:26.771 Test: blob_ext_md_pages ...passed 00:05:26.771 Test: blob_esnap_io_4096_4096 ...passed 00:05:26.771 Test: blob_esnap_io_512_512 ...passed 00:05:26.771 Test: blob_esnap_io_4096_512 ...passed 00:05:26.771 Test: 
blob_esnap_io_512_4096 ...passed 00:05:26.771 Test: blob_esnap_clone_resize ...passed 00:05:26.771 Suite: blob_bs_nocopy_extent 00:05:27.037 Test: blob_open ...passed 00:05:27.038 Test: blob_create ...[2024-05-14 23:20:50.108066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:27.038 passed 00:05:27.038 Test: blob_create_loop ...passed 00:05:27.038 Test: blob_create_fail ...[2024-05-14 23:20:50.223963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:27.038 passed 00:05:27.038 Test: blob_create_internal ...passed 00:05:27.038 Test: blob_create_zero_extent ...passed 00:05:27.312 Test: blob_snapshot ...passed 00:05:27.312 Test: blob_clone ...passed 00:05:27.312 Test: blob_inflate ...[2024-05-14 23:20:50.409173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:27.312 passed 00:05:27.312 Test: blob_delete ...passed 00:05:27.312 Test: blob_resize_test ...[2024-05-14 23:20:50.478851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:27.312 passed 00:05:27.312 Test: blob_resize_thin_test ...passed 00:05:27.312 Test: channel_ops ...passed 00:05:27.572 Test: blob_super ...passed 00:05:27.572 Test: blob_rw_verify_iov ...passed 00:05:27.572 Test: blob_unmap ...passed 00:05:27.572 Test: blob_iter ...passed 00:05:27.572 Test: blob_parse_md ...passed 00:05:27.572 Test: bs_load_pending_removal ...passed 00:05:27.572 Test: bs_unload ...[2024-05-14 23:20:50.794905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:27.572 passed 00:05:27.572 Test: bs_usable_clusters ...passed 00:05:27.830 Test: blob_crc ...[2024-05-14 23:20:50.861159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:27.830 [2024-05-14 23:20:50.861314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:27.830 passed 00:05:27.830 Test: blob_flags ...passed 00:05:27.830 Test: bs_version ...passed 00:05:27.830 Test: blob_set_xattrs_test ...[2024-05-14 23:20:50.962388] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:27.830 [2024-05-14 23:20:50.962507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:27.830 passed 00:05:27.830 Test: blob_thin_prov_alloc ...passed 00:05:27.830 Test: blob_insert_cluster_msg_test ...passed 00:05:27.830 Test: blob_thin_prov_rw ...passed 00:05:28.088 Test: blob_thin_prov_rle ...passed 00:05:28.088 Test: blob_thin_prov_rw_iov ...passed 00:05:28.088 Test: blob_snapshot_rw ...passed 00:05:28.088 Test: blob_snapshot_rw_iov ...passed 00:05:28.346 Test: blob_inflate_rw ...passed 00:05:28.346 Test: blob_snapshot_freeze_io ...passed 00:05:28.346 Test: blob_operation_split_rw ...passed 00:05:28.605 Test: blob_operation_split_rw_iov ...passed 00:05:28.605 Test: blob_simultaneous_operations ...[2024-05-14 23:20:51.772487] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:28.605 [2024-05-14 23:20:51.772600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.605 [2024-05-14 23:20:51.774193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:28.605 [2024-05-14 23:20:51.774253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.605 [2024-05-14 23:20:51.787781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:28.605 [2024-05-14 23:20:51.787850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.605 [2024-05-14 23:20:51.787954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:28.605 [2024-05-14 23:20:51.787979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:28.605 passed 00:05:28.605 Test: blob_persist_test ...passed 00:05:28.864 Test: blob_decouple_snapshot ...passed 00:05:28.864 Test: blob_seek_io_unit ...passed 00:05:28.864 Test: blob_nested_freezes ...passed 00:05:28.864 Test: blob_clone_resize ...passed 00:05:28.864 Test: blob_shallow_copy ...[2024-05-14 23:20:52.028077] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:28.864 [2024-05-14 23:20:52.028538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7315:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:28.864 [2024-05-14 23:20:52.028748] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7323:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:28.864 passed 00:05:28.864 Suite: blob_blob_nocopy_extent 00:05:28.864 Test: blob_write ...passed 00:05:28.864 Test: blob_read ...passed 00:05:28.864 Test: blob_rw_verify ...passed 00:05:29.123 Test: blob_rw_verify_iov_nomem ...passed 00:05:29.123 Test: blob_rw_iov_read_only ...passed 00:05:29.123 Test: blob_xattr ...passed 00:05:29.123 Test: blob_dirty_shutdown ...passed 00:05:29.123 Test: blob_is_degraded ...passed 00:05:29.123 Suite: blob_esnap_bs_nocopy_extent 00:05:29.123 Test: blob_esnap_create ...passed 00:05:29.123 Test: blob_esnap_thread_add_remove ...passed 00:05:29.123 Test: blob_esnap_clone_snapshot ...passed 00:05:29.123 Test: blob_esnap_clone_inflate ...passed 00:05:29.381 Test: blob_esnap_clone_decouple ...passed 00:05:29.381 Test: blob_esnap_clone_reload ...passed 00:05:29.381 Test: blob_esnap_hotplug ...passed 00:05:29.381 Suite: blob_copy_noextent 00:05:29.381 Test: blob_init ...[2024-05-14 23:20:52.475939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5463:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:29.381 passed 00:05:29.381 Test: blob_thin_provision ...passed 00:05:29.381 Test: blob_read_only ...passed 00:05:29.381 Test: bs_load ...passed 00:05:29.381 Test: bs_load_custom_cluster_size ...[2024-05-14 23:20:52.514261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 938:blob_parse: *ERROR*: Blobid (0x0) 
doesn't match what's in metadata (0x100000000) 00:05:29.381 passed 00:05:29.381 Test: bs_load_after_failed_grow ...passed 00:05:29.381 Test: bs_cluster_sz ...[2024-05-14 23:20:52.534751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:29.381 [2024-05-14 23:20:52.534875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5594:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:05:29.381 [2024-05-14 23:20:52.534931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3856:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:29.381 passed 00:05:29.381 Test: bs_resize_md ...passed 00:05:29.381 Test: bs_destroy ...passed 00:05:29.381 Test: bs_type ...passed 00:05:29.381 Test: bs_super_block ...passed 00:05:29.381 Test: bs_test_recover_cluster_count ...passed 00:05:29.381 Test: bs_grow_live ...passed 00:05:29.381 Test: bs_grow_live_no_space ...passed 00:05:29.381 Test: bs_test_grow ...passed 00:05:29.381 Test: blob_serialize_test ...passed 00:05:29.381 Test: super_block_crc ...passed 00:05:29.381 Test: blob_thin_prov_write_count_io ...passed 00:05:29.381 Test: blob_thin_prov_unmap_cluster ...passed 00:05:29.381 Test: bs_load_iter_test ...passed 00:05:29.381 Test: blob_relations ...[2024-05-14 23:20:52.667574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.381 [2024-05-14 23:20:52.667672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.381 [2024-05-14 23:20:52.668107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.381 [2024-05-14 23:20:52.668133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.639 passed 00:05:29.639 Test: blob_relations2 ...[2024-05-14 23:20:52.679338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.639 [2024-05-14 23:20:52.679403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.639 [2024-05-14 23:20:52.679453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.639 [2024-05-14 23:20:52.679471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.639 [2024-05-14 23:20:52.680143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.639 [2024-05-14 23:20:52.680517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.639 [2024-05-14 23:20:52.680759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:29.639 [2024-05-14 23:20:52.680785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.639 passed 00:05:29.639 Test: blob_relations3 ...passed 00:05:29.639 Test: blobstore_clean_power_failure ...passed 00:05:29.640 Test: 
blob_delete_snapshot_power_failure ...[2024-05-14 23:20:52.807053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:29.640 [2024-05-14 23:20:52.816579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:29.640 [2024-05-14 23:20:52.816668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:29.640 [2024-05-14 23:20:52.816712] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.640 [2024-05-14 23:20:52.827381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:29.640 [2024-05-14 23:20:52.827468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:29.640 [2024-05-14 23:20:52.827501] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:29.640 [2024-05-14 23:20:52.827524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.640 [2024-05-14 23:20:52.841723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:29.640 [2024-05-14 23:20:52.841907] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.640 [2024-05-14 23:20:52.852238] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:29.640 [2024-05-14 23:20:52.852351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.640 [2024-05-14 23:20:52.862148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:29.640 [2024-05-14 23:20:52.862475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:29.640 passed 00:05:29.640 Test: blob_create_snapshot_power_failure ...[2024-05-14 23:20:52.892596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:29.640 [2024-05-14 23:20:52.910639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:29.640 [2024-05-14 23:20:52.923819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:29.898 passed 00:05:29.898 Test: blob_io_unit ...passed 00:05:29.898 Test: blob_io_unit_compatibility ...passed 00:05:29.898 Test: blob_ext_md_pages ...passed 00:05:29.898 Test: blob_esnap_io_4096_4096 ...passed 00:05:29.898 Test: blob_esnap_io_512_512 ...passed 00:05:29.898 Test: blob_esnap_io_4096_512 ...passed 00:05:29.898 Test: blob_esnap_io_512_4096 ...passed 00:05:29.898 Test: blob_esnap_clone_resize ...passed 00:05:29.898 Suite: blob_bs_copy_noextent 00:05:29.898 Test: blob_open ...passed 00:05:29.898 Test: blob_create ...[2024-05-14 23:20:53.128482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, 
size in clusters/size: 65 (clusters) 00:05:29.898 passed 00:05:30.156 Test: blob_create_loop ...passed 00:05:30.156 Test: blob_create_fail ...[2024-05-14 23:20:53.217233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:30.156 passed 00:05:30.156 Test: blob_create_internal ...passed 00:05:30.156 Test: blob_create_zero_extent ...passed 00:05:30.156 Test: blob_snapshot ...passed 00:05:30.156 Test: blob_clone ...passed 00:05:30.156 Test: blob_inflate ...[2024-05-14 23:20:53.368917] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:30.156 passed 00:05:30.156 Test: blob_delete ...passed 00:05:30.156 Test: blob_resize_test ...[2024-05-14 23:20:53.425989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:30.156 passed 00:05:30.414 Test: blob_resize_thin_test ...passed 00:05:30.414 Test: channel_ops ...passed 00:05:30.414 Test: blob_super ...passed 00:05:30.414 Test: blob_rw_verify_iov ...passed 00:05:30.414 Test: blob_unmap ...passed 00:05:30.414 Test: blob_iter ...passed 00:05:30.414 Test: blob_parse_md ...passed 00:05:30.414 Test: bs_load_pending_removal ...passed 00:05:30.672 Test: bs_unload ...[2024-05-14 23:20:53.704427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:30.672 passed 00:05:30.672 Test: bs_usable_clusters ...passed 00:05:30.672 Test: blob_crc ...[2024-05-14 23:20:53.770282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:30.672 [2024-05-14 23:20:53.770414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:30.672 passed 00:05:30.672 Test: blob_flags ...passed 00:05:30.672 Test: bs_version ...passed 00:05:30.672 Test: blob_set_xattrs_test ...[2024-05-14 23:20:53.869302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:30.672 [2024-05-14 23:20:53.869426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:30.672 passed 00:05:30.672 Test: blob_thin_prov_alloc ...passed 00:05:30.930 Test: blob_insert_cluster_msg_test ...passed 00:05:30.930 Test: blob_thin_prov_rw ...passed 00:05:30.930 Test: blob_thin_prov_rle ...passed 00:05:30.930 Test: blob_thin_prov_rw_iov ...passed 00:05:30.930 Test: blob_snapshot_rw ...passed 00:05:30.930 Test: blob_snapshot_rw_iov ...passed 00:05:31.187 Test: blob_inflate_rw ...passed 00:05:31.187 Test: blob_snapshot_freeze_io ...passed 00:05:31.446 Test: blob_operation_split_rw ...passed 00:05:31.446 Test: blob_operation_split_rw_iov ...passed 00:05:31.446 Test: blob_simultaneous_operations ...[2024-05-14 23:20:54.624656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:31.446 [2024-05-14 23:20:54.624759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.446 [2024-05-14 23:20:54.625525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: 
*ERROR*: Cannot remove snapshot because it is open 00:05:31.446 [2024-05-14 23:20:54.625593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.446 [2024-05-14 23:20:54.629210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:31.446 [2024-05-14 23:20:54.629274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.446 [2024-05-14 23:20:54.629402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:31.446 [2024-05-14 23:20:54.629436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:31.446 passed 00:05:31.446 Test: blob_persist_test ...passed 00:05:31.446 Test: blob_decouple_snapshot ...passed 00:05:31.704 Test: blob_seek_io_unit ...passed 00:05:31.704 Test: blob_nested_freezes ...passed 00:05:31.704 Test: blob_clone_resize ...passed 00:05:31.704 Test: blob_shallow_copy ...[2024-05-14 23:20:54.853011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:31.704 [2024-05-14 23:20:54.854071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7315:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:31.704 [2024-05-14 23:20:54.854355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7323:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:31.704 passed 00:05:31.704 Suite: blob_blob_copy_noextent 00:05:31.704 Test: blob_write ...passed 00:05:31.704 Test: blob_read ...passed 00:05:31.704 Test: blob_rw_verify ...passed 00:05:31.704 Test: blob_rw_verify_iov_nomem ...passed 00:05:31.963 Test: blob_rw_iov_read_only ...passed 00:05:31.963 Test: blob_xattr ...passed 00:05:31.963 Test: blob_dirty_shutdown ...passed 00:05:31.963 Test: blob_is_degraded ...passed 00:05:31.963 Suite: blob_esnap_bs_copy_noextent 00:05:31.963 Test: blob_esnap_create ...passed 00:05:31.963 Test: blob_esnap_thread_add_remove ...passed 00:05:31.963 Test: blob_esnap_clone_snapshot ...passed 00:05:32.221 Test: blob_esnap_clone_inflate ...passed 00:05:32.221 Test: blob_esnap_clone_decouple ...passed 00:05:32.221 Test: blob_esnap_clone_reload ...passed 00:05:32.221 Test: blob_esnap_hotplug ...passed 00:05:32.221 Suite: blob_copy_extent 00:05:32.221 Test: blob_init ...[2024-05-14 23:20:55.353252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5463:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:32.221 passed 00:05:32.221 Test: blob_thin_provision ...passed 00:05:32.221 Test: blob_read_only ...passed 00:05:32.221 Test: bs_load ...[2024-05-14 23:20:55.397693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 938:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:32.221 passed 00:05:32.221 Test: bs_load_custom_cluster_size ...passed 00:05:32.221 Test: bs_load_after_failed_grow ...passed 00:05:32.221 Test: bs_cluster_sz ...[2024-05-14 23:20:55.420434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3797:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:32.221 [2024-05-14 23:20:55.420585] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5594:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:05:32.221 [2024-05-14 23:20:55.420632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3856:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:32.221 passed 00:05:32.221 Test: bs_resize_md ...passed 00:05:32.221 Test: bs_destroy ...passed 00:05:32.221 Test: bs_type ...passed 00:05:32.221 Test: bs_super_block ...passed 00:05:32.221 Test: bs_test_recover_cluster_count ...passed 00:05:32.221 Test: bs_grow_live ...passed 00:05:32.221 Test: bs_grow_live_no_space ...passed 00:05:32.221 Test: bs_test_grow ...passed 00:05:32.221 Test: blob_serialize_test ...passed 00:05:32.480 Test: super_block_crc ...passed 00:05:32.480 Test: blob_thin_prov_write_count_io ...passed 00:05:32.480 Test: blob_thin_prov_unmap_cluster ...passed 00:05:32.480 Test: bs_load_iter_test ...passed 00:05:32.480 Test: blob_relations ...[2024-05-14 23:20:55.561334] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.480 [2024-05-14 23:20:55.561439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.480 [2024-05-14 23:20:55.561939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.480 [2024-05-14 23:20:55.561965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.480 passed 00:05:32.480 Test: blob_relations2 ...[2024-05-14 23:20:55.573292] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.480 [2024-05-14 23:20:55.573382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.480 [2024-05-14 23:20:55.573436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.480 [2024-05-14 23:20:55.573455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.480 [2024-05-14 23:20:55.574170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.480 [2024-05-14 23:20:55.574508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.480 [2024-05-14 23:20:55.574931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7950:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:32.480 [2024-05-14 23:20:55.574959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.480 passed 00:05:32.480 Test: blob_relations3 ...passed 00:05:32.480 Test: blobstore_clean_power_failure ...passed 00:05:32.480 Test: blob_delete_snapshot_power_failure ...[2024-05-14 23:20:55.733729] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:32.480 [2024-05-14 23:20:55.745861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: 
-5 00:05:32.480 [2024-05-14 23:20:55.758120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:32.480 [2024-05-14 23:20:55.758228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:32.480 [2024-05-14 23:20:55.758274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.740 [2024-05-14 23:20:55.770045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:32.740 [2024-05-14 23:20:55.770124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:32.740 [2024-05-14 23:20:55.770379] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:32.740 [2024-05-14 23:20:55.770453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.740 [2024-05-14 23:20:55.784959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:32.740 [2024-05-14 23:20:55.785065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1439:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:32.740 [2024-05-14 23:20:55.785094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7864:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:32.740 [2024-05-14 23:20:55.785121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.740 [2024-05-14 23:20:55.798492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7791:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:32.740 [2024-05-14 23:20:55.798646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.740 [2024-05-14 23:20:55.811820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7660:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:32.740 [2024-05-14 23:20:55.811974] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.740 [2024-05-14 23:20:55.827546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:32.740 [2024-05-14 23:20:55.827698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:32.740 passed 00:05:32.740 Test: blob_create_snapshot_power_failure ...[2024-05-14 23:20:55.861990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:32.740 [2024-05-14 23:20:55.873184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1552:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:32.740 [2024-05-14 23:20:55.895524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1642:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:32.740 [2024-05-14 23:20:55.907874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6419:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:32.740 passed 00:05:32.740 Test: blob_io_unit ...passed 00:05:32.740 Test: blob_io_unit_compatibility ...passed 
00:05:32.740 Test: blob_ext_md_pages ...passed 00:05:32.740 Test: blob_esnap_io_4096_4096 ...passed 00:05:32.999 Test: blob_esnap_io_512_512 ...passed 00:05:32.999 Test: blob_esnap_io_4096_512 ...passed 00:05:32.999 Test: blob_esnap_io_512_4096 ...passed 00:05:32.999 Test: blob_esnap_clone_resize ...passed 00:05:32.999 Suite: blob_bs_copy_extent 00:05:32.999 Test: blob_open ...passed 00:05:32.999 Test: blob_create ...[2024-05-14 23:20:56.166989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:32.999 passed 00:05:32.999 Test: blob_create_loop ...passed 00:05:32.999 Test: blob_create_fail ...[2024-05-14 23:20:56.285912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:33.258 passed 00:05:33.258 Test: blob_create_internal ...passed 00:05:33.258 Test: blob_create_zero_extent ...passed 00:05:33.258 Test: blob_snapshot ...passed 00:05:33.258 Test: blob_clone ...passed 00:05:33.258 Test: blob_inflate ...[2024-05-14 23:20:56.469527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7082:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:33.258 passed 00:05:33.258 Test: blob_delete ...passed 00:05:33.258 Test: blob_resize_test ...[2024-05-14 23:20:56.537918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7409:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:33.517 passed 00:05:33.517 Test: blob_resize_thin_test ...passed 00:05:33.517 Test: channel_ops ...passed 00:05:33.517 Test: blob_super ...passed 00:05:33.517 Test: blob_rw_verify_iov ...passed 00:05:33.517 Test: blob_unmap ...passed 00:05:33.517 Test: blob_iter ...passed 00:05:33.517 Test: blob_parse_md ...passed 00:05:33.775 Test: bs_load_pending_removal ...passed 00:05:33.776 Test: bs_unload ...[2024-05-14 23:20:56.860460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5851:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:33.776 passed 00:05:33.776 Test: bs_usable_clusters ...passed 00:05:33.776 Test: blob_crc ...[2024-05-14 23:20:56.936077] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:33.776 [2024-05-14 23:20:56.936473] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1651:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:33.776 passed 00:05:33.776 Test: blob_flags ...passed 00:05:33.776 Test: bs_version ...passed 00:05:33.776 Test: blob_set_xattrs_test ...[2024-05-14 23:20:57.043433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:33.776 [2024-05-14 23:20:57.043569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6300:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:33.776 passed 00:05:34.042 Test: blob_thin_prov_alloc ...passed 00:05:34.042 Test: blob_insert_cluster_msg_test ...passed 00:05:34.043 Test: blob_thin_prov_rw ...passed 00:05:34.043 Test: blob_thin_prov_rle ...passed 00:05:34.043 Test: blob_thin_prov_rw_iov ...passed 00:05:34.043 Test: blob_snapshot_rw ...passed 00:05:34.315 Test: blob_snapshot_rw_iov ...passed 00:05:34.315 Test: blob_inflate_rw ...passed 00:05:34.315 Test: blob_snapshot_freeze_io ...passed 00:05:34.573 Test: 
blob_operation_split_rw ...passed 00:05:34.573 Test: blob_operation_split_rw_iov ...passed 00:05:34.573 Test: blob_simultaneous_operations ...[2024-05-14 23:20:57.825796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:34.573 [2024-05-14 23:20:57.825909] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.573 [2024-05-14 23:20:57.826534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:34.573 [2024-05-14 23:20:57.826600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.573 [2024-05-14 23:20:57.829227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:34.573 [2024-05-14 23:20:57.829258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.573 [2024-05-14 23:20:57.829331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7977:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:34.573 [2024-05-14 23:20:57.829352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7917:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:34.573 passed 00:05:34.832 Test: blob_persist_test ...passed 00:05:34.832 Test: blob_decouple_snapshot ...passed 00:05:34.832 Test: blob_seek_io_unit ...passed 00:05:34.832 Test: blob_nested_freezes ...passed 00:05:34.832 Test: blob_clone_resize ...passed 00:05:34.832 Test: blob_shallow_copy ...[2024-05-14 23:20:58.065391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7305:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:34.832 [2024-05-14 23:20:58.065733] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7315:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:34.832 [2024-05-14 23:20:58.065951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7323:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:34.832 passed 00:05:34.832 Suite: blob_blob_copy_extent 00:05:34.832 Test: blob_write ...passed 00:05:35.089 Test: blob_read ...passed 00:05:35.089 Test: blob_rw_verify ...passed 00:05:35.089 Test: blob_rw_verify_iov_nomem ...passed 00:05:35.089 Test: blob_rw_iov_read_only ...passed 00:05:35.089 Test: blob_xattr ...passed 00:05:35.089 Test: blob_dirty_shutdown ...passed 00:05:35.089 Test: blob_is_degraded ...passed 00:05:35.089 Suite: blob_esnap_bs_copy_extent 00:05:35.347 Test: blob_esnap_create ...passed 00:05:35.347 Test: blob_esnap_thread_add_remove ...passed 00:05:35.347 Test: blob_esnap_clone_snapshot ...passed 00:05:35.347 Test: blob_esnap_clone_inflate ...passed 00:05:35.347 Test: blob_esnap_clone_decouple ...passed 00:05:35.347 Test: blob_esnap_clone_reload ...passed 00:05:35.347 Test: blob_esnap_hotplug ...passed 00:05:35.347 00:05:35.347 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.347 suites 16 16 n/a 0 0 00:05:35.347 tests 368 368 368 0 0 00:05:35.347 asserts 142985 142985 142985 0 n/a 00:05:35.347 00:05:35.347 Elapsed time = 12.570 seconds 00:05:35.606 23:20:58 unittest.unittest_blob_blobfs -- unit/unittest.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:05:35.606 00:05:35.606 00:05:35.606 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.606 http://cunit.sourceforge.net/ 00:05:35.606 00:05:35.606 00:05:35.606 Suite: blob_bdev 00:05:35.606 Test: create_bs_dev ...passed 00:05:35.606 Test: create_bs_dev_ro ...passed 00:05:35.606 Test: create_bs_dev_rw ...passed 00:05:35.606 Test: claim_bs_dev ...passed 00:05:35.606 Test: claim_bs_dev_ro ...passed 00:05:35.606 Test: deferred_destroy_refs ...passed 00:05:35.606 Test: deferred_destroy_channels ...passed 00:05:35.606 Test: deferred_destroy_threads ...passed 00:05:35.606 00:05:35.606 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.606 suites 1 1 n/a 0 0 00:05:35.606 tests 8 8 8 0 0 00:05:35.606 asserts 119 119 119 0 n/a 00:05:35.606 00:05:35.606 Elapsed time = 0.000 seconds 00:05:35.606 [2024-05-14 23:20:58.686288] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:05:35.606 [2024-05-14 23:20:58.686507] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:05:35.606 23:20:58 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:05:35.606 00:05:35.606 00:05:35.606 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.606 http://cunit.sourceforge.net/ 00:05:35.606 00:05:35.606 00:05:35.606 Suite: tree 00:05:35.606 Test: blobfs_tree_op_test ...passed 00:05:35.606 00:05:35.606 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.606 suites 1 1 n/a 0 0 00:05:35.606 tests 1 1 1 0 0 00:05:35.606 asserts 27 27 27 0 n/a 00:05:35.606 00:05:35.606 Elapsed time = 0.000 seconds 00:05:35.606 23:20:58 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:05:35.606 00:05:35.606 00:05:35.606 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.606 http://cunit.sourceforge.net/ 00:05:35.606 00:05:35.606 00:05:35.606 Suite: blobfs_async_ut 00:05:35.606 Test: fs_init ...passed 00:05:35.606 Test: fs_open ...passed 00:05:35.606 Test: fs_create ...passed 00:05:35.606 Test: fs_truncate ...passed 00:05:35.606 Test: fs_rename ...[2024-05-14 23:20:58.831675] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:05:35.606 passed 00:05:35.606 Test: fs_rw_async ...passed 00:05:35.606 Test: fs_writev_readv_async ...passed 00:05:35.606 Test: tree_find_buffer_ut ...passed 00:05:35.606 Test: channel_ops ...passed 00:05:35.865 Test: channel_ops_sync ...passed 00:05:35.865 00:05:35.865 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.865 suites 1 1 n/a 0 0 00:05:35.865 tests 10 10 10 0 0 00:05:35.865 asserts 292 292 292 0 n/a 00:05:35.865 00:05:35.865 Elapsed time = 0.150 seconds 00:05:35.865 23:20:58 unittest.unittest_blob_blobfs -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:05:35.865 00:05:35.865 00:05:35.865 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.865 http://cunit.sourceforge.net/ 00:05:35.865 00:05:35.865 00:05:35.865 Suite: blobfs_sync_ut 00:05:35.865 Test: cache_read_after_write ...passed 00:05:35.865 Test: file_length ...[2024-05-14 23:20:58.974551] 
/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:05:35.865 passed 00:05:35.865 Test: append_write_to_extend_blob ...passed 00:05:35.865 Test: partial_buffer ...passed 00:05:35.865 Test: cache_write_null_buffer ...passed 00:05:35.865 Test: fs_create_sync ...passed 00:05:35.865 Test: fs_rename_sync ...passed 00:05:35.865 Test: cache_append_no_cache ...passed 00:05:35.865 Test: fs_delete_file_without_close ...passed 00:05:35.865 00:05:35.865 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.865 suites 1 1 n/a 0 0 00:05:35.865 tests 9 9 9 0 0 00:05:35.865 asserts 345 345 345 0 n/a 00:05:35.865 00:05:35.865 Elapsed time = 0.290 seconds 00:05:35.865 23:20:59 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:05:35.865 00:05:35.865 00:05:35.865 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.865 http://cunit.sourceforge.net/ 00:05:35.865 00:05:35.865 00:05:35.865 Suite: blobfs_bdev_ut 00:05:35.865 Test: spdk_blobfs_bdev_detect_test ...passed 00:05:35.865 Test: spdk_blobfs_bdev_create_test ...passed 00:05:35.865 Test: spdk_blobfs_bdev_mount_test ...passed 00:05:35.865 00:05:35.865 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.865 suites 1 1 n/a 0 0 00:05:35.865 tests 3 3 3 0 0 00:05:35.865 asserts 9 9 9 0 n/a 00:05:35.865 00:05:35.865 Elapsed time = 0.010 seconds 00:05:35.865 [2024-05-14 23:20:59.138312] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:05:35.865 [2024-05-14 23:20:59.138561] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:05:35.865 ************************************ 00:05:35.865 END TEST unittest_blob_blobfs 00:05:35.865 ************************************ 00:05:35.865 00:05:35.865 real 0m13.224s 00:05:35.865 user 0m12.549s 00:05:35.865 sys 0m0.753s 00:05:35.865 23:20:59 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.865 23:20:59 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:05:36.125 23:20:59 unittest -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:05:36.125 23:20:59 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.125 23:20:59 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.125 23:20:59 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:36.125 ************************************ 00:05:36.125 START TEST unittest_event 00:05:36.125 ************************************ 00:05:36.125 23:20:59 unittest.unittest_event -- common/autotest_common.sh@1121 -- # unittest_event 00:05:36.125 23:20:59 unittest.unittest_event -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:05:36.125 00:05:36.125 00:05:36.125 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.125 http://cunit.sourceforge.net/ 00:05:36.125 00:05:36.125 00:05:36.125 Suite: app_suite 00:05:36.125 Test: test_spdk_app_parse_args ...app_ut [options] 00:05:36.125 00:05:36.125 CPU options: 00:05:36.125 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:05:36.125 (like [0,1,10]) 00:05:36.125 --lcores lcore to CPU mapping list. The list is in the format: 00:05:36.125 [<,lcores[@CPUs]>...] 
00:05:36.125 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:36.125 Within the group, '-' is used for range separator, 00:05:36.125 ',' is used for single number separator. 00:05:36.125 '( )' can be omitted for single element group, 00:05:36.125 '@' can be omitted if cpus and lcores have the same value 00:05:36.125 --disable-cpumask-locks Disable CPU core lock files. 00:05:36.125 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:05:36.125 pollers in the app support interrupt mode) 00:05:36.125 -p, --main-core main (primary) core for DPDK 00:05:36.125 00:05:36.125 Configuration options: 00:05:36.125 -c, --config, --json JSON config file 00:05:36.125 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:36.125 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:05:36.125 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:36.125 --rpcs-allowed comma-separated list of permitted RPCS 00:05:36.125 --json-ignore-init-errors don't exit on invalid config entry 00:05:36.125 00:05:36.125 Memory options: 00:05:36.125 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:36.125 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:36.125 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:36.125 -R, --huge-unlink unlink huge files after initialization 00:05:36.125 -n, --mem-channels number of memory channels used for DPDK 00:05:36.125 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:05:36.125 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:36.125 --no-huge run without using hugepages 00:05:36.125 -i, --shm-id shared memory ID (optional) 00:05:36.125 -g, --single-file-segments force creating just one hugetlbfs file 00:05:36.125 00:05:36.125 PCI options: 00:05:36.125 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:36.125 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:36.125 -u, --no-pci disable PCI access 00:05:36.125 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:36.125 00:05:36.125 Log options: 00:05:36.125 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:05:36.125 --silence-noticelog disable notice level logging to stderr 00:05:36.125 00:05:36.125 Trace options: 00:05:36.125 --num-trace-entries number of trace entries for each core, must be power of 2, 00:05:36.125 setting 0 to disable trace (default 32768) 00:05:36.125 Tracepoints vary in size and can use more than one trace entry. 00:05:36.125 -e, --tpoint-group [:] 00:05:36.125 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:05:36.125 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:05:36.125 a tracepoint group. First tpoint inside a group can be enabled by 00:05:36.125 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:05:36.125 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:05:36.125 in /include/spdk_internal/trace_defs.h 00:05:36.125 00:05:36.125 Other options: 00:05:36.125 -h, --help show this usage 00:05:36.125 -v, --version print SPDK version 00:05:36.125 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:36.125 --env-context Opaque context for use of the env implementation 00:05:36.125 app_ut [options] 00:05:36.125 00:05:36.125 CPU options: 00:05:36.125 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:05:36.125 (like [0,1,10]) 00:05:36.125 --lcores lcore to CPU mapping list. The list is in the format: 00:05:36.125 [<,lcores[@CPUs]>...] 00:05:36.125 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:36.125 Within the group, '-' is used for range separator, 00:05:36.125 ',' is used for single number separator. 00:05:36.125 '( )' can be omitted for single element group, 00:05:36.125 '@' can be omitted if cpus and lcores have the same value 00:05:36.125 --disable-cpumask-locks Disable CPU core lock files. 00:05:36.126 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:05:36.126 pollers in the app support interrupt mode) 00:05:36.126 -p, --main-core main (primary) core for DPDK 00:05:36.126 00:05:36.126 Configuration options: 00:05:36.126 -c, --config, --json JSON config file 00:05:36.126 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:36.126 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:05:36.126 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:36.126 --rpcs-allowed comma-separated list of permitted RPCS 00:05:36.126 --json-ignore-init-errors don't exit on invalid config entry 00:05:36.126 00:05:36.126 Memory options: 00:05:36.126 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:36.126 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:36.126 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:36.126 -R, --huge-unlink unlink huge files after initialization 00:05:36.126 -n, --mem-channels number of memory channels used for DPDK 00:05:36.126 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:05:36.126 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:36.126 --no-huge run without using hugepages 00:05:36.126 -i, --shm-id shared memory ID (optional) 00:05:36.126 -g, --single-file-segments force creating just one hugetlbfs file 00:05:36.126 00:05:36.126 PCI options: 00:05:36.126 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:36.126 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:36.126 -u, --no-pci disable PCI access 00:05:36.126 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:36.126 00:05:36.126 Log options: 00:05:36.126 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:05:36.126 --silence-noticelog disable notice level logging to stderr 00:05:36.126 00:05:36.126 Trace options: 00:05:36.126 --num-trace-entries number of trace entries for each core, must be power of 2, 00:05:36.126 setting 0 to disable trace (default 32768) 00:05:36.126 Tracepoints vary in size and can use more than one trace entry. 00:05:36.126 -e, --tpoint-group [:] 00:05:36.126 group_name - tracepoint group name for spdk trace buffers (thread, all). 
00:05:36.126 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:05:36.126 a tracepoint group. First tpoint inside a group can be enabled by 00:05:36.126 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:05:36.126 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:05:36.126 in /include/spdk_internal/trace_defs.h 00:05:36.126 00:05:36.126 Other options: 00:05:36.126 -h, --help show this usage 00:05:36.126 -v, --version print SPDK version 00:05:36.126 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:36.126 --env-context Opaque context for use of the env implementation 00:05:36.126 app_ut [options] 00:05:36.126 00:05:36.126 CPU options: 00:05:36.126 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:05:36.126 (like [0,1,10]) 00:05:36.126 --lcores lcore to CPU mapping list. The list is in the format: 00:05:36.126 [<,lcores[@CPUs]>...] 00:05:36.126 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:05:36.126 Within the group, '-' is used for range separator, 00:05:36.126 ',' is used for single number separator. 00:05:36.126 '( )' can be omitted for single element group, 00:05:36.126 '@' can be omitted if cpus and lcores have the same value 00:05:36.126 --disable-cpumask-locks Disable CPU core lock files. 00:05:36.126 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:05:36.126 pollers in the app support interrupt mode) 00:05:36.126 -p, --main-core main (primary) core for DPDK 00:05:36.126 00:05:36.126 Configuration options: 00:05:36.126 -c, --config, --json JSON config file 00:05:36.126 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:05:36.126 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:05:36.126 --wait-for-rpc wait for RPCs to initialize subsystems 00:05:36.126 --rpcs-allowed comma-separated list of permitted RPCS 00:05:36.126 --json-ignore-init-errors don't exit on invalid config entry 00:05:36.126 00:05:36.126 Memory options: 00:05:36.126 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:05:36.126 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:05:36.126 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:05:36.126 -R, --huge-unlink unlink huge files after initialization 00:05:36.126 -n, --mem-channels number of memory channels used for DPDK 00:05:36.126 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:05:36.126 --msg-mempool-size global message memory pool size in count (default: 262143) 00:05:36.126 --no-huge run without using hugepages 00:05:36.126 -i, --shm-id shared memory ID (optional) 00:05:36.126 -g, --single-file-segments force creating just one hugetlbfs file 00:05:36.126 00:05:36.126 PCI options: 00:05:36.126 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:05:36.126 -B, --pci-blocked pci addr to block (can be used more than once) 00:05:36.126 -u, --no-pci disable PCI access 00:05:36.126 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:05:36.126 00:05:36.126 Log options: 00:05:36.126 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:05:36.126 --silence-noticelog disable notice level logging to stderr 00:05:36.126 00:05:36.126 Trace options: 00:05:36.126 --num-trace-entries number of trace entries for each core, must be power of 2, 00:05:36.126 setting 0 to disable trace (default 32768) 00:05:36.126 Tracepoints vary in size and can use more than one trace entry. 00:05:36.126 -e, --tpoint-group [:] 00:05:36.126 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:05:36.126 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:05:36.126 a tracepoint group. First tpoint inside a group can be enabled by 00:05:36.126 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:05:36.126 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:05:36.126 in /include/spdk_internal/trace_defs.h 00:05:36.126 00:05:36.126 Other options: 00:05:36.126 -h, --help show this usage 00:05:36.126 -v, --version print SPDK version 00:05:36.126 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:05:36.126 --env-context Opaque context for use of the env implementation 00:05:36.126 passed 00:05:36.126 00:05:36.126 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.126 suites 1 1 n/a 0 0 00:05:36.126 tests 1 1 1 0 0 00:05:36.126 asserts 8 8 8 0 n/a 00:05:36.126 00:05:36.126 Elapsed time = 0.000 seconds 00:05:36.126 app_ut: invalid option -- 'z' 00:05:36.126 app_ut: unrecognized option '--test-long-opt' 00:05:36.126 [2024-05-14 23:20:59.211885] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
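test_spdk_app_parse_args feeds app_ut deliberately bad argument vectors, which is why the usage text is dumped three times and the stderr lines "app_ut: invalid option -- 'z'" and "app_ut: unrecognized option '--test-long-opt'" appear alongside it; that message format is what glibc's getopt(3)/getopt_long(3) print when error reporting is enabled, so the parser under test is presumably layered on top of them. A minimal sketch of this kind of negative test follows; the option table, the parse_args() helper and the sample argv vectors are illustrative and are not SPDK's actual application options.

#include <assert.h>
#include <getopt.h>
#include <stdio.h>

/* Parse argv against a small option table and report whether every option was
 * recognized; getopt_long() itself prints the "invalid option"/"unrecognized
 * option" messages to stderr, matching the app_ut lines in the log. */
static int
parse_args(int argc, char **argv)
{
	static const struct option long_opts[] = {
		{ "config", required_argument, NULL, 'c' },
		{ "help",   no_argument,       NULL, 'h' },
		{ NULL, 0, NULL, 0 }
	};
	int ch;

	optind = 0;   /* 0 makes glibc getopt re-initialize; BSD libcs use optreset */
	while ((ch = getopt_long(argc, argv, "c:h", long_opts, NULL)) != -1) {
		if (ch == '?') {
			return -1;   /* unknown short or long option */
		}
	}
	return 0;
}

int
main(void)
{
	char *bad_short[] = { "app_ut", "-z", NULL };
	char *bad_long[]  = { "app_ut", "--test-long-opt", NULL };
	char *good[]      = { "app_ut", "-c", "conf.json", NULL };

	assert(parse_args(2, bad_short) == -1);   /* prints: invalid option -- 'z' */
	assert(parse_args(2, bad_long) == -1);    /* prints: unrecognized option ... */
	assert(parse_args(3, good) == 0);
	printf("argv parsing checks passed\n");
	return 0;
}

Re-running the parser over several argv vectors in one process is exactly what a unit test has to do, hence the explicit optind reset before each pass.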
00:05:36.126 [2024-05-14 23:20:59.212129] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:05:36.126 [2024-05-14 23:20:59.212420] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:05:36.126 23:20:59 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:05:36.126 00:05:36.126 00:05:36.126 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.126 http://cunit.sourceforge.net/ 00:05:36.126 00:05:36.126 00:05:36.126 Suite: app_suite 00:05:36.126 Test: test_create_reactor ...passed 00:05:36.126 Test: test_init_reactors ...passed 00:05:36.126 Test: test_event_call ...passed 00:05:36.126 Test: test_schedule_thread ...passed 00:05:36.126 Test: test_reschedule_thread ...passed 00:05:36.126 Test: test_bind_thread ...passed 00:05:36.126 Test: test_for_each_reactor ...passed 00:05:36.126 Test: test_reactor_stats ...passed 00:05:36.126 Test: test_scheduler ...passed 00:05:36.126 Test: test_governor ...passed 00:05:36.126 00:05:36.126 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.126 suites 1 1 n/a 0 0 00:05:36.126 tests 10 10 10 0 0 00:05:36.126 asserts 344 344 344 0 n/a 00:05:36.126 00:05:36.126 Elapsed time = 0.010 seconds 00:05:36.126 ************************************ 00:05:36.126 END TEST unittest_event 00:05:36.126 ************************************ 00:05:36.126 00:05:36.126 real 0m0.068s 00:05:36.126 user 0m0.037s 00:05:36.126 sys 0m0.031s 00:05:36.126 23:20:59 unittest.unittest_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.126 23:20:59 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:05:36.126 23:20:59 unittest -- unit/unittest.sh@233 -- # uname -s 00:05:36.126 23:20:59 unittest -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:05:36.126 23:20:59 unittest -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:05:36.126 23:20:59 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.126 23:20:59 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.126 23:20:59 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:36.126 ************************************ 00:05:36.126 START TEST unittest_ftl 00:05:36.126 ************************************ 00:05:36.126 23:20:59 unittest.unittest_ftl -- common/autotest_common.sh@1121 -- # unittest_ftl 00:05:36.126 23:20:59 unittest.unittest_ftl -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:05:36.126 00:05:36.127 00:05:36.127 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.127 http://cunit.sourceforge.net/ 00:05:36.127 00:05:36.127 00:05:36.127 Suite: ftl_band_suite 00:05:36.127 Test: test_band_block_offset_from_addr_base ...passed 00:05:36.127 Test: test_band_block_offset_from_addr_offset ...passed 00:05:36.386 Test: test_band_addr_from_block_offset ...passed 00:05:36.386 Test: test_band_set_addr ...passed 00:05:36.386 Test: test_invalidate_addr ...passed 00:05:36.386 Test: test_next_xfer_addr ...passed 00:05:36.386 00:05:36.386 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.386 suites 1 1 n/a 0 0 00:05:36.386 tests 6 6 6 0 0 00:05:36.386 asserts 30356 30356 30356 0 n/a 00:05:36.386 00:05:36.386 Elapsed time = 0.180 seconds 00:05:36.386 23:20:59 unittest.unittest_ftl -- unit/unittest.sh@56 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:05:36.386 00:05:36.386 00:05:36.386 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.386 http://cunit.sourceforge.net/ 00:05:36.386 00:05:36.386 00:05:36.386 Suite: ftl_bitmap 00:05:36.386 Test: test_ftl_bitmap_create ...passed 00:05:36.386 Test: test_ftl_bitmap_get ...passed 00:05:36.386 Test: test_ftl_bitmap_set ...passed 00:05:36.386 Test: test_ftl_bitmap_clear ...passed 00:05:36.386 Test: test_ftl_bitmap_find_first_set ...passed 00:05:36.386 Test: test_ftl_bitmap_find_first_clear ...passed 00:05:36.386 Test: test_ftl_bitmap_count_set ...passed 00:05:36.386 00:05:36.386 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.386 suites 1 1 n/a 0 0 00:05:36.386 tests 7 7 7 0 0 00:05:36.386 asserts 137 137 137 0 n/a 00:05:36.386 00:05:36.386 Elapsed time = 0.000 seconds 00:05:36.386 [2024-05-14 23:20:59.587946] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:05:36.386 [2024-05-14 23:20:59.588191] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:05:36.386 23:20:59 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:05:36.386 00:05:36.386 00:05:36.386 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.386 http://cunit.sourceforge.net/ 00:05:36.386 00:05:36.386 00:05:36.386 Suite: ftl_io_suite 00:05:36.386 Test: test_completion ...passed 00:05:36.386 Test: test_multiple_ios ...passed 00:05:36.386 00:05:36.386 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.386 suites 1 1 n/a 0 0 00:05:36.386 tests 2 2 2 0 0 00:05:36.386 asserts 47 47 47 0 n/a 00:05:36.386 00:05:36.386 Elapsed time = 0.000 seconds 00:05:36.386 23:20:59 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:05:36.386 00:05:36.386 00:05:36.386 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.386 http://cunit.sourceforge.net/ 00:05:36.386 00:05:36.386 00:05:36.386 Suite: ftl_mngt 00:05:36.386 Test: test_next_step ...passed 00:05:36.386 Test: test_continue_step ...passed 00:05:36.386 Test: test_get_func_and_step_cntx_alloc ...passed 00:05:36.386 Test: test_fail_step ...passed 00:05:36.386 Test: test_mngt_call_and_call_rollback ...passed 00:05:36.386 Test: test_nested_process_failure ...passed 00:05:36.386 00:05:36.386 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.386 suites 1 1 n/a 0 0 00:05:36.386 tests 6 6 6 0 0 00:05:36.386 asserts 176 176 176 0 n/a 00:05:36.386 00:05:36.386 Elapsed time = 0.000 seconds 00:05:36.386 23:20:59 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:05:36.386 00:05:36.386 00:05:36.386 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.386 http://cunit.sourceforge.net/ 00:05:36.386 00:05:36.386 00:05:36.386 Suite: ftl_mempool 00:05:36.386 Test: test_ftl_mempool_create ...passed 00:05:36.386 Test: test_ftl_mempool_get_put ...passed 00:05:36.386 00:05:36.386 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.386 suites 1 1 n/a 0 0 00:05:36.386 tests 2 2 2 0 0 00:05:36.386 asserts 36 36 36 0 n/a 00:05:36.386 00:05:36.386 Elapsed time = 0.000 seconds 00:05:36.386 23:20:59 unittest.unittest_ftl -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:05:36.645 00:05:36.645 00:05:36.645 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.645 http://cunit.sourceforge.net/ 00:05:36.645 00:05:36.645 00:05:36.645 Suite: ftl_addr64_suite 00:05:36.645 Test: test_addr_cached ...passed 00:05:36.645 00:05:36.645 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.645 suites 1 1 n/a 0 0 00:05:36.645 tests 1 1 1 0 0 00:05:36.645 asserts 1536 1536 1536 0 n/a 00:05:36.645 00:05:36.645 Elapsed time = 0.000 seconds 00:05:36.645 23:20:59 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:05:36.645 00:05:36.645 00:05:36.645 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.645 http://cunit.sourceforge.net/ 00:05:36.645 00:05:36.645 00:05:36.645 Suite: ftl_sb 00:05:36.645 Test: test_sb_crc_v2 ...passed 00:05:36.645 Test: test_sb_crc_v3 ...passed 00:05:36.645 Test: test_sb_v3_md_layout ...passed 00:05:36.645 Test: test_sb_v5_md_layout ...passed 00:05:36.645 00:05:36.645 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.645 suites 1 1 n/a 0 0 00:05:36.645 tests 4 4 4 0 0 00:05:36.645 asserts 148 148 148 0 n/a 00:05:36.645 00:05:36.645 Elapsed time = 0.000 seconds 00:05:36.645 [2024-05-14 23:20:59.707427] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:05:36.645 [2024-05-14 23:20:59.707679] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:05:36.645 [2024-05-14 23:20:59.707721] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:05:36.645 [2024-05-14 23:20:59.707747] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:05:36.645 [2024-05-14 23:20:59.707768] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:05:36.645 [2024-05-14 23:20:59.707833] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:05:36.645 [2024-05-14 23:20:59.707853] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:05:36.645 [2024-05-14 23:20:59.707888] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:05:36.645 [2024-05-14 23:20:59.707935] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:05:36.645 [2024-05-14 23:20:59.707958] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:05:36.645 [2024-05-14 23:20:59.707976] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:05:36.645 23:20:59 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:05:36.645 00:05:36.645 
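The ftl_sb test_sb_v3_md_layout errors just above ("Missing regions", "Buffer overflow", "Looping regions found", "Invalid MD region type found") come from handing the superblock loader deliberately corrupted metadata-layout descriptors and expecting each one to be rejected. The exact on-disk format is SPDK-internal, but the rejections fall into generic classes: unknown region types, regions that run past the device, and regions that repeat or loop. The sketch below only illustrates those generic classes; struct md_region, its fields and MD_REGION_TYPE_MAX are made up for the example and do not match SPDK's ftl superblock structures.

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative metadata region descriptor; the field names are made up. */
struct md_region {
	uint32_t type;        /* index into a table of known region types */
	uint64_t offset;      /* first block of the region */
	uint64_t num_blocks;  /* length of the region in blocks */
};

enum { MD_REGION_TYPE_MAX = 16 };

/* Accept a layout only if every region has a known type, stays inside the
 * device, and no type appears twice; these are generic counterparts of the
 * "Invalid MD region type", "Buffer overflow" and "Looping regions" failures
 * logged above. */
static bool
md_layout_valid(const struct md_region *regions, size_t count, uint64_t dev_blocks)
{
	bool seen[MD_REGION_TYPE_MAX] = { false };

	for (size_t i = 0; i < count; i++) {
		const struct md_region *r = &regions[i];

		if (r->type >= MD_REGION_TYPE_MAX) {
			return false;   /* unsupported/invalid region type */
		}
		if (r->num_blocks == 0 || r->num_blocks > dev_blocks ||
		    r->offset > dev_blocks - r->num_blocks) {
			return false;   /* region overflows the device */
		}
		if (seen[r->type]) {
			return false;   /* same type twice: the duplicate/loop case */
		}
		seen[r->type] = true;
	}
	return true;
}

int
main(void)
{
	struct md_region good[] = { { 0, 0, 8 }, { 1, 8, 8 } };
	struct md_region looped[] = { { 2, 0, 4 }, { 2, 4, 4 } };
	struct md_region overflow[] = { { 3, 60, 16 } };

	assert(md_layout_valid(good, 2, 64));
	assert(!md_layout_valid(looped, 2, 64));
	assert(!md_layout_valid(overflow, 1, 64));
	return 0;
}

Checking num_blocks against dev_blocks before subtracting keeps the bounds test free of unsigned wrap-around.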
00:05:36.645 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.645 http://cunit.sourceforge.net/ 00:05:36.645 00:05:36.645 00:05:36.645 Suite: ftl_layout_upgrade 00:05:36.645 Test: test_l2p_upgrade ...passed 00:05:36.645 00:05:36.645 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.645 suites 1 1 n/a 0 0 00:05:36.645 tests 1 1 1 0 0 00:05:36.645 asserts 140 140 140 0 n/a 00:05:36.645 00:05:36.645 Elapsed time = 0.000 seconds 00:05:36.645 ************************************ 00:05:36.645 END TEST unittest_ftl 00:05:36.645 ************************************ 00:05:36.645 00:05:36.645 real 0m0.429s 00:05:36.645 user 0m0.174s 00:05:36.645 sys 0m0.257s 00:05:36.645 23:20:59 unittest.unittest_ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.645 23:20:59 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:05:36.645 23:20:59 unittest -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:05:36.645 23:20:59 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.645 23:20:59 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.645 23:20:59 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:36.645 ************************************ 00:05:36.645 START TEST unittest_accel 00:05:36.645 ************************************ 00:05:36.645 23:20:59 unittest.unittest_accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:05:36.645 00:05:36.645 00:05:36.645 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.645 http://cunit.sourceforge.net/ 00:05:36.645 00:05:36.645 00:05:36.645 Suite: accel_sequence 00:05:36.645 Test: test_sequence_fill_copy ...passed 00:05:36.645 Test: test_sequence_abort ...passed 00:05:36.645 Test: test_sequence_append_error ...passed 00:05:36.645 Test: test_sequence_completion_error ...passed 00:05:36.645 Test: test_sequence_copy_elision ...[2024-05-14 23:20:59.803723] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1901:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f73f3d357c0 00:05:36.645 [2024-05-14 23:20:59.803950] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1901:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f73f3d357c0 00:05:36.645 [2024-05-14 23:20:59.804019] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1811:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f73f3d357c0 00:05:36.645 [2024-05-14 23:20:59.804060] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1811:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f73f3d357c0 00:05:36.645 passed 00:05:36.645 Test: test_sequence_accel_buffers ...passed 00:05:36.645 Test: test_sequence_memory_domain ...passed 00:05:36.645 Test: test_sequence_module_memory_domain ...[2024-05-14 23:20:59.807825] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1703:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:05:36.645 [2024-05-14 23:20:59.807924] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1742:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:05:36.645 passed 00:05:36.645 Test: test_sequence_driver ...passed 00:05:36.646 Test: test_sequence_same_iovs ...passed 00:05:36.646 Test: test_sequence_crc32 ...[2024-05-14 23:20:59.810308] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1850:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f73f321f7c0 using driver: ut 00:05:36.646 [2024-05-14 23:20:59.810380] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1914:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f73f321f7c0 through driver: ut 00:05:36.646 passed 00:05:36.646 Suite: accel 00:05:36.646 Test: test_spdk_accel_task_complete ...passed 00:05:36.646 Test: test_get_task ...passed 00:05:36.646 Test: test_spdk_accel_submit_copy ...passed 00:05:36.646 Test: test_spdk_accel_submit_dualcast ...passed 00:05:36.646 Test: test_spdk_accel_submit_compare ...passed 00:05:36.646 Test: test_spdk_accel_submit_fill ...passed 00:05:36.646 Test: test_spdk_accel_submit_crc32c ...passed 00:05:36.646 Test: test_spdk_accel_submit_crc32cv ...passed 00:05:36.646 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:05:36.646 Test: test_spdk_accel_submit_xor ...passed 00:05:36.646 Test: test_spdk_accel_module_find_by_name ...passed 00:05:36.646 Test: test_spdk_accel_module_register ...[2024-05-14 23:20:59.813287] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:05:36.646 [2024-05-14 23:20:59.813335] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:05:36.646 passed 00:05:36.646 00:05:36.646 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.646 suites 2 2 n/a 0 0 00:05:36.646 tests 23 23 23 0 0 00:05:36.646 asserts 750 750 750 0 n/a 00:05:36.646 00:05:36.646 Elapsed time = 0.010 seconds 00:05:36.646 ************************************ 00:05:36.646 END TEST unittest_accel 00:05:36.646 ************************************ 00:05:36.646 00:05:36.646 real 0m0.046s 00:05:36.646 user 0m0.017s 00:05:36.646 sys 0m0.029s 00:05:36.646 23:20:59 unittest.unittest_accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.646 23:20:59 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.646 23:20:59 unittest -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:05:36.646 23:20:59 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.646 23:20:59 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.646 23:20:59 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:36.646 ************************************ 00:05:36.646 START TEST unittest_ioat 00:05:36.646 ************************************ 00:05:36.646 23:20:59 unittest.unittest_ioat -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:05:36.646 00:05:36.646 00:05:36.646 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.646 http://cunit.sourceforge.net/ 00:05:36.646 00:05:36.646 00:05:36.646 Suite: ioat 00:05:36.646 Test: ioat_state_check ...passed 00:05:36.646 00:05:36.646 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.646 suites 1 1 n/a 0 0 00:05:36.646 tests 1 1 1 0 0 00:05:36.646 asserts 32 32 32 0 n/a 00:05:36.646 00:05:36.646 Elapsed time = 0.000 seconds 00:05:36.646 00:05:36.646 real 0m0.026s 00:05:36.646 user 0m0.011s 00:05:36.646 sys 0m0.015s 00:05:36.646 23:20:59 unittest.unittest_ioat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.646 23:20:59 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:05:36.646 
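Further up, test_spdk_accel_submit_dualcast exercises the rejection path for destination buffers that are not 4 KiB aligned ("Dualcast requires 4K alignment on dst addresses"). The check being exercised is plain pointer arithmetic; the following sketch shows the same validation in isolation, with dualcast_dsts_aligned() invented for the example rather than taken from lib/accel.

#define _POSIX_C_SOURCE 200112L

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define ALIGN_4K 0x1000u

/* Nonzero when both destination pointers sit on a 4 KiB boundary, which is
 * the rule the dualcast error message above refers to. */
static int
dualcast_dsts_aligned(const void *dst1, const void *dst2)
{
	return ((uintptr_t)dst1 % ALIGN_4K) == 0 &&
	       ((uintptr_t)dst2 % ALIGN_4K) == 0;
}

int
main(void)
{
	void *dst1 = NULL;
	void *dst2 = NULL;

	/* posix_memalign() guarantees the requested alignment on success. */
	if (posix_memalign(&dst1, ALIGN_4K, ALIGN_4K) != 0 ||
	    posix_memalign(&dst2, ALIGN_4K, ALIGN_4K) != 0) {
		return 1;
	}

	assert(dualcast_dsts_aligned(dst1, dst2));
	/* Offsetting an aligned buffer by one byte is guaranteed to break it. */
	assert(!dualcast_dsts_aligned((char *)dst1 + 1, dst2));

	free(dst1);
	free(dst2);
	return 0;
}

Allocating the positive case with posix_memalign() and deriving the negative case by a one-byte offset keeps both outcomes deterministic instead of depending on where the stack happens to land.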
************************************ 00:05:36.646 END TEST unittest_ioat 00:05:36.646 ************************************ 00:05:36.646 23:20:59 unittest -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:36.905 23:20:59 unittest -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:05:36.905 23:20:59 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.905 23:20:59 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.905 23:20:59 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:36.905 ************************************ 00:05:36.905 START TEST unittest_idxd_user 00:05:36.905 ************************************ 00:05:36.905 23:20:59 unittest.unittest_idxd_user -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:05:36.905 00:05:36.905 00:05:36.905 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.905 http://cunit.sourceforge.net/ 00:05:36.905 00:05:36.905 00:05:36.905 Suite: idxd_user 00:05:36.905 Test: test_idxd_wait_cmd ...[2024-05-14 23:20:59.961273] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:05:36.905 passed 00:05:36.905 Test: test_idxd_reset_dev ...passed 00:05:36.905 Test: test_idxd_group_config ...passed 00:05:36.905 Test: test_idxd_wq_config ...passed 00:05:36.905 00:05:36.905 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.905 suites 1 1 n/a 0 0 00:05:36.905 tests 4 4 4 0 0 00:05:36.905 asserts 20 20 20 0 n/a 00:05:36.905 00:05:36.905 Elapsed time = 0.000 seconds 00:05:36.906 [2024-05-14 23:20:59.961528] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:05:36.906 [2024-05-14 23:20:59.961628] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:05:36.906 [2024-05-14 23:20:59.961669] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:05:36.906 00:05:36.906 real 0m0.028s 00:05:36.906 user 0m0.013s 00:05:36.906 sys 0m0.015s 00:05:36.906 23:20:59 unittest.unittest_idxd_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.906 ************************************ 00:05:36.906 END TEST unittest_idxd_user 00:05:36.906 ************************************ 00:05:36.906 23:20:59 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:05:36.906 23:21:00 unittest -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:05:36.906 23:21:00 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.906 23:21:00 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.906 23:21:00 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:36.906 ************************************ 00:05:36.906 START TEST unittest_iscsi 00:05:36.906 ************************************ 00:05:36.906 23:21:00 unittest.unittest_iscsi -- common/autotest_common.sh@1121 -- # unittest_iscsi 00:05:36.906 23:21:00 unittest.unittest_iscsi -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:05:36.906 00:05:36.906 00:05:36.906 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.906 http://cunit.sourceforge.net/ 00:05:36.906 00:05:36.906 00:05:36.906 
Suite: conn_suite 00:05:36.906 Test: read_task_split_in_order_case ...passed 00:05:36.906 Test: read_task_split_reverse_order_case ...passed 00:05:36.906 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:05:36.906 Test: process_non_read_task_completion_test ...passed 00:05:36.906 Test: free_tasks_on_connection ...passed 00:05:36.906 Test: free_tasks_with_queued_datain ...passed 00:05:36.906 Test: abort_queued_datain_task_test ...passed 00:05:36.906 Test: abort_queued_datain_tasks_test ...passed 00:05:36.906 00:05:36.906 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.906 suites 1 1 n/a 0 0 00:05:36.906 tests 8 8 8 0 0 00:05:36.906 asserts 230 230 230 0 n/a 00:05:36.906 00:05:36.906 Elapsed time = 0.000 seconds 00:05:36.906 23:21:00 unittest.unittest_iscsi -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:05:36.906 00:05:36.906 00:05:36.906 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.906 http://cunit.sourceforge.net/ 00:05:36.906 00:05:36.906 00:05:36.906 Suite: iscsi_suite 00:05:36.906 Test: param_negotiation_test ...passed 00:05:36.906 Test: list_negotiation_test ...passed 00:05:36.906 Test: parse_valid_test ...passed 00:05:36.906 Test: parse_invalid_test ...[2024-05-14 23:21:00.063830] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:05:36.906 [2024-05-14 23:21:00.064044] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:05:36.906 [2024-05-14 23:21:00.064092] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:05:36.906 [2024-05-14 23:21:00.064164] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:05:36.906 [2024-05-14 23:21:00.064313] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:05:36.906 [2024-05-14 23:21:00.064393] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:05:36.906 passed 00:05:36.906 00:05:36.906 [2024-05-14 23:21:00.064468] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:05:36.906 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.906 suites 1 1 n/a 0 0 00:05:36.906 tests 4 4 4 0 0 00:05:36.906 asserts 161 161 161 0 n/a 00:05:36.906 00:05:36.906 Elapsed time = 0.000 seconds 00:05:36.906 23:21:00 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:05:36.906 00:05:36.906 00:05:36.906 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.906 http://cunit.sourceforge.net/ 00:05:36.906 00:05:36.906 00:05:36.906 Suite: iscsi_target_node_suite 00:05:36.906 Test: add_lun_test_cases ...[2024-05-14 23:21:00.087280] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:05:36.906 [2024-05-14 23:21:00.087500] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:05:36.906 passed 00:05:36.906 Test: allow_any_allowed ...passed 00:05:36.906 Test: allow_ipv6_allowed ...passed 00:05:36.906 Test: allow_ipv6_denied ...passed 00:05:36.906 Test: allow_ipv6_invalid ...passed 00:05:36.906 Test: allow_ipv4_allowed ...passed 00:05:36.906 Test: allow_ipv4_denied ...passed 00:05:36.906 Test: allow_ipv4_invalid 
...passed 00:05:36.906 Test: node_access_allowed ...[2024-05-14 23:21:00.087576] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:05:36.906 [2024-05-14 23:21:00.087623] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:05:36.906 [2024-05-14 23:21:00.087646] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:05:36.906 passed 00:05:36.906 Test: node_access_denied_by_empty_netmask ...passed 00:05:36.906 Test: node_access_multi_initiator_groups_cases ...passed 00:05:36.906 Test: allow_iscsi_name_multi_maps_case ...passed 00:05:36.906 Test: chap_param_test_cases ...[2024-05-14 23:21:00.087934] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:05:36.906 passed 00:05:36.906 00:05:36.906 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.906 suites 1 1 n/a 0 0 00:05:36.906 tests 13 13 13 0 0 00:05:36.906 asserts 50 50 50 0 n/a 00:05:36.906 00:05:36.906 Elapsed time = 0.000 seconds 00:05:36.906 [2024-05-14 23:21:00.087972] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:05:36.906 [2024-05-14 23:21:00.088027] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:05:36.906 [2024-05-14 23:21:00.088053] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:05:36.906 [2024-05-14 23:21:00.088081] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:05:36.906 23:21:00 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:05:36.906 00:05:36.906 00:05:36.906 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.906 http://cunit.sourceforge.net/ 00:05:36.906 00:05:36.906 00:05:36.906 Suite: iscsi_suite 00:05:36.906 Test: op_login_check_target_test ...passed 00:05:36.906 Test: op_login_session_normal_test ...[2024-05-14 23:21:00.114027] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:05:36.906 [2024-05-14 23:21:00.114653] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:36.906 [2024-05-14 23:21:00.114703] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:36.906 [2024-05-14 23:21:00.114740] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:05:36.906 [2024-05-14 23:21:00.114785] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:05:36.906 [2024-05-14 23:21:00.115165] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:05:36.906 [2024-05-14 23:21:00.115317] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:05:36.906 [2024-05-14 23:21:00.115588] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:05:36.906 passed 00:05:36.906 Test: maxburstlength_test ...[2024-05-14 23:21:00.116056] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:05:36.906 passed 00:05:36.906 Test: underflow_for_read_transfer_test ...[2024-05-14 23:21:00.116116] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4554:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:05:36.906 passed 00:05:36.906 Test: underflow_for_zero_read_transfer_test ...passed 00:05:36.906 Test: underflow_for_request_sense_test ...passed 00:05:36.906 Test: underflow_for_check_condition_test ...passed 00:05:36.906 Test: add_transfer_task_test ...passed 00:05:36.906 Test: get_transfer_task_test ...passed 00:05:36.906 Test: del_transfer_task_test ...passed 00:05:36.906 Test: clear_all_transfer_tasks_test ...passed 00:05:36.906 Test: build_iovs_test ...passed 00:05:36.906 Test: build_iovs_with_md_test ...passed 00:05:36.906 Test: pdu_hdr_op_login_test ...[2024-05-14 23:21:00.117761] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:05:36.906 [2024-05-14 23:21:00.118043] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:05:36.906 passed 00:05:36.906 Test: pdu_hdr_op_text_test ...[2024-05-14 23:21:00.118103] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:05:36.906 [2024-05-14 23:21:00.118438] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2246:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:05:36.906 [2024-05-14 23:21:00.118524] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2278:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:05:36.906 [2024-05-14 23:21:00.118580] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2291:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:05:36.906 passed 00:05:36.906 Test: pdu_hdr_op_logout_test ...[2024-05-14 23:21:00.119038] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2521:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:05:36.906 passed 00:05:36.906 Test: pdu_hdr_op_scsi_test ...[2024-05-14 23:21:00.119251] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:05:36.906 [2024-05-14 23:21:00.119519] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:05:36.906 [2024-05-14 23:21:00.119667] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3370:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:05:36.906 [2024-05-14 23:21:00.119851] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3403:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:05:36.907 [2024-05-14 23:21:00.120011] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3410:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:05:36.907 [2024-05-14 23:21:00.120175] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3434:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:05:36.907 passed 00:05:36.907 Test: pdu_hdr_op_task_mgmt_test ...[2024-05-14 23:21:00.120487] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3611:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:05:36.907 [2024-05-14 23:21:00.120529] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3700:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:05:36.907 passed 00:05:36.907 Test: pdu_hdr_op_nopout_test ...[2024-05-14 23:21:00.120858] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3719:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:05:36.907 [2024-05-14 23:21:00.120918] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:05:36.907 [2024-05-14 23:21:00.121196] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:05:36.907 passed 00:05:36.907 Test: pdu_hdr_op_data_test ...[2024-05-14 23:21:00.121248] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3749:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:05:36.907 [2024-05-14 23:21:00.121288] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4192:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:05:36.907 [2024-05-14 23:21:00.121347] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4209:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:05:36.907 [2024-05-14 23:21:00.121631] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:05:36.907 [2024-05-14 23:21:00.121682] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:05:36.907 [2024-05-14 23:21:00.121918] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4228:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:05:36.907 [2024-05-14 23:21:00.121965] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4239:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:05:36.907 [2024-05-14 23:21:00.121996] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4249:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:05:36.907 passed 00:05:36.907 Test: empty_text_with_cbit_test ...passed 00:05:36.907 Test: pdu_payload_read_test ...[2024-05-14 
23:21:00.123509] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4637:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:05:36.907 passed 00:05:36.907 Test: data_out_pdu_sequence_test ...passed 00:05:36.907 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:05:36.907 00:05:36.907 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.907 suites 1 1 n/a 0 0 00:05:36.907 tests 24 24 24 0 0 00:05:36.907 asserts 150253 150253 150253 0 n/a 00:05:36.907 00:05:36.907 Elapsed time = 0.020 seconds 00:05:36.907 23:21:00 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:05:36.907 00:05:36.907 00:05:36.907 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.907 http://cunit.sourceforge.net/ 00:05:36.907 00:05:36.907 00:05:36.907 Suite: init_grp_suite 00:05:36.907 Test: create_initiator_group_success_case ...passed 00:05:36.907 Test: find_initiator_group_success_case ...passed 00:05:36.907 Test: register_initiator_group_twice_case ...passed 00:05:36.907 Test: add_initiator_name_success_case ...passed 00:05:36.907 Test: add_initiator_name_fail_case ...[2024-05-14 23:21:00.150605] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:05:36.907 passed 00:05:36.907 Test: delete_all_initiator_names_success_case ...passed 00:05:36.907 Test: add_netmask_success_case ...passed 00:05:36.907 Test: add_netmask_fail_case ...passed 00:05:36.907 Test: delete_all_netmasks_success_case ...[2024-05-14 23:21:00.151237] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:05:36.907 passed 00:05:36.907 Test: initiator_name_overwrite_all_to_any_case ...passed 00:05:36.907 Test: netmask_overwrite_all_to_any_case ...passed 00:05:36.907 Test: add_delete_initiator_names_case ...passed 00:05:36.907 Test: add_duplicated_initiator_names_case ...passed 00:05:36.907 Test: delete_nonexisting_initiator_names_case ...passed 00:05:36.907 Test: add_delete_netmasks_case ...passed 00:05:36.907 Test: add_duplicated_netmasks_case ...passed 00:05:36.907 Test: delete_nonexisting_netmasks_case ...passed 00:05:36.907 00:05:36.907 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.907 suites 1 1 n/a 0 0 00:05:36.907 tests 17 17 17 0 0 00:05:36.907 asserts 108 108 108 0 n/a 00:05:36.907 00:05:36.907 Elapsed time = 0.000 seconds 00:05:36.907 23:21:00 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:05:36.907 00:05:36.907 00:05:36.907 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.907 http://cunit.sourceforge.net/ 00:05:36.907 00:05:36.907 00:05:36.907 Suite: portal_grp_suite 00:05:36.907 Test: portal_create_ipv4_normal_case ...passed 00:05:36.907 Test: portal_create_ipv6_normal_case ...passed 00:05:36.907 Test: portal_create_ipv4_wildcard_case ...passed 00:05:36.907 Test: portal_create_ipv6_wildcard_case ...passed 00:05:36.907 Test: portal_create_twice_case ...[2024-05-14 23:21:00.176606] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:05:36.907 passed 00:05:36.907 Test: portal_grp_register_unregister_case ...passed 00:05:36.907 Test: portal_grp_register_twice_case ...passed 00:05:36.907 Test: portal_grp_add_delete_case ...passed 00:05:36.907 Test: portal_grp_add_delete_twice_case 
...passed 00:05:36.907 00:05:36.907 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.907 suites 1 1 n/a 0 0 00:05:36.907 tests 9 9 9 0 0 00:05:36.907 asserts 44 44 44 0 n/a 00:05:36.907 00:05:36.907 Elapsed time = 0.000 seconds 00:05:36.907 00:05:36.907 real 0m0.168s 00:05:36.907 user 0m0.086s 00:05:36.907 sys 0m0.084s 00:05:36.907 23:21:00 unittest.unittest_iscsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.907 23:21:00 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:05:36.907 ************************************ 00:05:36.907 END TEST unittest_iscsi 00:05:36.907 ************************************ 00:05:37.166 23:21:00 unittest -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:05:37.166 23:21:00 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.166 23:21:00 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.166 23:21:00 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.166 ************************************ 00:05:37.166 START TEST unittest_json 00:05:37.166 ************************************ 00:05:37.166 23:21:00 unittest.unittest_json -- common/autotest_common.sh@1121 -- # unittest_json 00:05:37.166 23:21:00 unittest.unittest_json -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:05:37.166 00:05:37.166 00:05:37.166 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.166 http://cunit.sourceforge.net/ 00:05:37.166 00:05:37.166 00:05:37.166 Suite: json 00:05:37.166 Test: test_parse_literal ...passed 00:05:37.166 Test: test_parse_string_simple ...passed 00:05:37.166 Test: test_parse_string_control_chars ...passed 00:05:37.166 Test: test_parse_string_utf8 ...passed 00:05:37.166 Test: test_parse_string_escapes_twochar ...passed 00:05:37.166 Test: test_parse_string_escapes_unicode ...passed 00:05:37.166 Test: test_parse_number ...passed 00:05:37.166 Test: test_parse_array ...passed 00:05:37.166 Test: test_parse_object ...passed 00:05:37.166 Test: test_parse_nesting ...passed 00:05:37.166 Test: test_parse_comment ...passed 00:05:37.166 00:05:37.166 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.166 suites 1 1 n/a 0 0 00:05:37.166 tests 11 11 11 0 0 00:05:37.166 asserts 1516 1516 1516 0 n/a 00:05:37.166 00:05:37.166 Elapsed time = 0.010 seconds 00:05:37.166 23:21:00 unittest.unittest_json -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:05:37.166 00:05:37.166 00:05:37.166 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.166 http://cunit.sourceforge.net/ 00:05:37.166 00:05:37.166 00:05:37.166 Suite: json 00:05:37.166 Test: test_strequal ...passed 00:05:37.166 Test: test_num_to_uint16 ...passed 00:05:37.166 Test: test_num_to_int32 ...passed 00:05:37.166 Test: test_num_to_uint64 ...passed 00:05:37.166 Test: test_decode_object ...passed 00:05:37.166 Test: test_decode_array ...passed 00:05:37.166 Test: test_decode_bool ...passed 00:05:37.166 Test: test_decode_uint16 ...passed 00:05:37.166 Test: test_decode_int32 ...passed 00:05:37.166 Test: test_decode_uint32 ...passed 00:05:37.166 Test: test_decode_uint64 ...passed 00:05:37.166 Test: test_decode_string ...passed 00:05:37.166 Test: test_decode_uuid ...passed 00:05:37.166 Test: test_find ...passed 00:05:37.166 Test: test_find_array ...passed 00:05:37.166 Test: test_iterating ...passed 00:05:37.166 Test: test_free_object ...passed 00:05:37.166 00:05:37.166 Run Summary: Type 
Total Ran Passed Failed Inactive 00:05:37.166 suites 1 1 n/a 0 0 00:05:37.166 tests 17 17 17 0 0 00:05:37.166 asserts 236 236 236 0 n/a 00:05:37.166 00:05:37.166 Elapsed time = 0.000 seconds 00:05:37.166 23:21:00 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:05:37.166 00:05:37.166 00:05:37.166 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.166 http://cunit.sourceforge.net/ 00:05:37.166 00:05:37.166 00:05:37.166 Suite: json 00:05:37.166 Test: test_write_literal ...passed 00:05:37.166 Test: test_write_string_simple ...passed 00:05:37.166 Test: test_write_string_escapes ...passed 00:05:37.166 Test: test_write_string_utf16le ...passed 00:05:37.166 Test: test_write_number_int32 ...passed 00:05:37.166 Test: test_write_number_uint32 ...passed 00:05:37.166 Test: test_write_number_uint128 ...passed 00:05:37.166 Test: test_write_string_number_uint128 ...passed 00:05:37.166 Test: test_write_number_int64 ...passed 00:05:37.166 Test: test_write_number_uint64 ...passed 00:05:37.166 Test: test_write_number_double ...passed 00:05:37.166 Test: test_write_uuid ...passed 00:05:37.166 Test: test_write_array ...passed 00:05:37.166 Test: test_write_object ...passed 00:05:37.166 Test: test_write_nesting ...passed 00:05:37.166 Test: test_write_val ...passed 00:05:37.166 00:05:37.166 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.166 suites 1 1 n/a 0 0 00:05:37.166 tests 16 16 16 0 0 00:05:37.166 asserts 918 918 918 0 n/a 00:05:37.166 00:05:37.166 Elapsed time = 0.000 seconds 00:05:37.166 23:21:00 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:05:37.166 00:05:37.166 00:05:37.166 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.166 http://cunit.sourceforge.net/ 00:05:37.166 00:05:37.166 00:05:37.166 Suite: jsonrpc 00:05:37.166 Test: test_parse_request ...passed 00:05:37.166 Test: test_parse_request_streaming ...passed 00:05:37.166 00:05:37.166 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.166 suites 1 1 n/a 0 0 00:05:37.166 tests 2 2 2 0 0 00:05:37.166 asserts 289 289 289 0 n/a 00:05:37.166 00:05:37.166 Elapsed time = 0.000 seconds 00:05:37.166 00:05:37.166 real 0m0.111s 00:05:37.166 user 0m0.058s 00:05:37.166 sys 0m0.054s 00:05:37.166 23:21:00 unittest.unittest_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.166 23:21:00 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.166 ************************************ 00:05:37.166 END TEST unittest_json 00:05:37.166 ************************************ 00:05:37.166 23:21:00 unittest -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:05:37.166 23:21:00 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.166 23:21:00 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.166 23:21:00 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.166 ************************************ 00:05:37.166 START TEST unittest_rpc 00:05:37.166 ************************************ 00:05:37.166 23:21:00 unittest.unittest_rpc -- common/autotest_common.sh@1121 -- # unittest_rpc 00:05:37.166 23:21:00 unittest.unittest_rpc -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:05:37.166 00:05:37.166 00:05:37.166 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.166 http://cunit.sourceforge.net/ 00:05:37.166 
00:05:37.166 00:05:37.166 Suite: rpc 00:05:37.166 Test: test_jsonrpc_handler ...passed 00:05:37.166 Test: test_spdk_rpc_is_method_allowed ...passed 00:05:37.166 Test: test_rpc_get_methods ...[2024-05-14 23:21:00.399027] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:05:37.166 passed 00:05:37.166 Test: test_rpc_spdk_get_version ...passed 00:05:37.166 Test: test_spdk_rpc_listen_close ...passed 00:05:37.166 Test: test_rpc_run_multiple_servers ...passed 00:05:37.166 00:05:37.166 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.166 suites 1 1 n/a 0 0 00:05:37.166 tests 6 6 6 0 0 00:05:37.166 asserts 23 23 23 0 n/a 00:05:37.166 00:05:37.166 Elapsed time = 0.000 seconds 00:05:37.166 00:05:37.166 real 0m0.028s 00:05:37.166 user 0m0.014s 00:05:37.166 sys 0m0.014s 00:05:37.166 ************************************ 00:05:37.166 END TEST unittest_rpc 00:05:37.166 ************************************ 00:05:37.166 23:21:00 unittest.unittest_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.166 23:21:00 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.166 23:21:00 unittest -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:05:37.166 23:21:00 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.166 23:21:00 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.166 23:21:00 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.427 ************************************ 00:05:37.427 START TEST unittest_notify 00:05:37.427 ************************************ 00:05:37.427 23:21:00 unittest.unittest_notify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:05:37.427 00:05:37.427 00:05:37.427 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.427 http://cunit.sourceforge.net/ 00:05:37.427 00:05:37.427 00:05:37.427 Suite: app_suite 00:05:37.427 Test: notify ...passed 00:05:37.427 00:05:37.427 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.427 suites 1 1 n/a 0 0 00:05:37.427 tests 1 1 1 0 0 00:05:37.427 asserts 13 13 13 0 n/a 00:05:37.427 00:05:37.427 Elapsed time = 0.000 seconds 00:05:37.427 00:05:37.427 real 0m0.023s 00:05:37.427 user 0m0.014s 00:05:37.427 sys 0m0.009s 00:05:37.427 23:21:00 unittest.unittest_notify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.427 23:21:00 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:05:37.427 ************************************ 00:05:37.427 END TEST unittest_notify 00:05:37.427 ************************************ 00:05:37.427 23:21:00 unittest -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:05:37.427 23:21:00 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.427 23:21:00 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.427 23:21:00 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:37.427 ************************************ 00:05:37.427 START TEST unittest_nvme 00:05:37.427 ************************************ 00:05:37.427 23:21:00 unittest.unittest_nvme -- common/autotest_common.sh@1121 -- # unittest_nvme 00:05:37.427 23:21:00 unittest.unittest_nvme -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:05:37.427 00:05:37.427 00:05:37.427 CUnit - A unit testing framework for C - Version 2.1-3 
00:05:37.427 http://cunit.sourceforge.net/ 00:05:37.427 00:05:37.427 00:05:37.427 Suite: nvme 00:05:37.427 Test: test_opc_data_transfer ...passed 00:05:37.427 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:05:37.427 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:05:37.427 Test: test_trid_parse_and_compare ...[2024-05-14 23:21:00.537506] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1176:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:05:37.427 [2024-05-14 23:21:00.538142] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:37.427 [2024-05-14 23:21:00.538269] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1188:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:05:37.427 [2024-05-14 23:21:00.538308] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:37.428 [2024-05-14 23:21:00.538338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without value 00:05:37.428 [2024-05-14 23:21:00.538693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:05:37.428 passed 00:05:37.428 Test: test_trid_trtype_str ...passed 00:05:37.428 Test: test_trid_adrfam_str ...passed 00:05:37.428 Test: test_nvme_ctrlr_probe ...[2024-05-14 23:21:00.539288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:05:37.428 passed 00:05:37.428 Test: test_spdk_nvme_probe ...[2024-05-14 23:21:00.539379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:05:37.428 [2024-05-14 23:21:00.539411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:05:37.428 [2024-05-14 23:21:00.539514] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:05:37.428 [2024-05-14 23:21:00.539540] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:05:37.428 passed 00:05:37.428 Test: test_spdk_nvme_connect ...[2024-05-14 23:21:00.539808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 994:spdk_nvme_connect: *ERROR*: No transport ID specified 00:05:37.428 [2024-05-14 23:21:00.540224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:05:37.428 [2024-05-14 23:21:00.540313] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1005:spdk_nvme_connect: *ERROR*: Create probe context failed 00:05:37.428 passed 00:05:37.428 Test: test_nvme_ctrlr_probe_internal ...[2024-05-14 23:21:00.540443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:05:37.428 [2024-05-14 23:21:00.540684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:05:37.428 passed 00:05:37.428 Test: test_nvme_init_controllers ...[2024-05-14 23:21:00.540777] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:05:37.428 passed 00:05:37.428 Test: test_nvme_driver_init ...[2024-05-14 23:21:00.541063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:05:37.428 [2024-05-14 23:21:00.541094] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:05:37.428 [2024-05-14 23:21:00.654815] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:05:37.428 passed 00:05:37.428 Test: test_spdk_nvme_detach ...[2024-05-14 23:21:00.655074] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:05:37.428 passed 00:05:37.428 Test: test_nvme_completion_poll_cb ...passed 00:05:37.428 Test: test_nvme_user_copy_cmd_complete ...passed 00:05:37.428 Test: test_nvme_allocate_request_null ...passed 00:05:37.428 Test: test_nvme_allocate_request ...passed 00:05:37.428 Test: test_nvme_free_request ...passed 00:05:37.428 Test: test_nvme_allocate_request_user_copy ...passed 00:05:37.428 Test: test_nvme_robust_mutex_init_shared ...passed 00:05:37.428 Test: test_nvme_request_check_timeout ...passed 00:05:37.428 Test: test_nvme_wait_for_completion ...passed 00:05:37.428 Test: test_spdk_nvme_parse_func ...passed 00:05:37.428 Test: test_spdk_nvme_detach_async ...passed 00:05:37.428 Test: test_nvme_parse_addr ...[2024-05-14 23:21:00.657205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1586:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:05:37.428 passed 00:05:37.428 00:05:37.428 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.428 suites 1 1 n/a 0 0 00:05:37.428 tests 25 25 25 0 0 00:05:37.428 asserts 326 326 326 0 n/a 00:05:37.428 00:05:37.428 Elapsed time = 0.000 seconds 00:05:37.428 23:21:00 unittest.unittest_nvme -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:05:37.428 00:05:37.428 00:05:37.428 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.428 http://cunit.sourceforge.net/ 00:05:37.428 00:05:37.428 00:05:37.428 Suite: nvme_ctrlr 00:05:37.428 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-05-14 23:21:00.684322] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.428 passed 00:05:37.428 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-05-14 23:21:00.686314] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.428 passed 00:05:37.428 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-05-14 23:21:00.687549] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.428 passed 00:05:37.428 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-05-14 23:21:00.688808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.428 passed 00:05:37.428 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-05-14 23:21:00.690100] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.428 [2024-05-14 23:21:00.691261] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-14 23:21:00.692462] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] 
Ctrlr enable failed with error: -22[2024-05-14 23:21:00.693607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:37.428 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-05-14 23:21:00.695977] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.428 [2024-05-14 23:21:00.698342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-14 23:21:00.699571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:37.428 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-05-14 23:21:00.702004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.428 [2024-05-14 23:21:00.703247] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-14 23:21:00.705689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:05:37.428 Test: test_nvme_ctrlr_init_delay ...[2024-05-14 23:21:00.708225] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.428 passed 00:05:37.428 Test: test_alloc_io_qpair_rr_1 ...[2024-05-14 23:21:00.710009] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.428 [2024-05-14 23:21:00.710571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:05:37.428 [2024-05-14 23:21:00.710903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:37.428 [2024-05-14 23:21:00.711130] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:37.428 passed 00:05:37.428 Test: test_ctrlr_get_default_ctrlr_opts ...[2024-05-14 23:21:00.711197] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:05:37.428 passed 00:05:37.428 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:05:37.428 Test: test_alloc_io_qpair_wrr_1 ...[2024-05-14 23:21:00.711761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.428 passed 00:05:37.428 Test: test_alloc_io_qpair_wrr_2 ...[2024-05-14 23:21:00.712220] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.428 [2024-05-14 23:21:00.712343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:05:37.705 passed 00:05:37.705 Test: test_spdk_nvme_ctrlr_update_firmware 
...[2024-05-14 23:21:00.712849] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4858:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:05:37.705 [2024-05-14 23:21:00.713392] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:05:37.705 [2024-05-14 23:21:00.713479] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4935:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:05:37.705 [2024-05-14 23:21:00.713524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_fail ...[2024-05-14 23:21:00.713909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:05:37.705 Test: test_nvme_ctrlr_set_supported_features ...passed 00:05:37.705 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:05:37.705 Test: test_nvme_ctrlr_test_active_ns ...[2024-05-14 23:21:00.714878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:05:37.705 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:05:37.705 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:05:37.705 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-05-14 23:21:00.880795] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-05-14 23:21:00.887965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-05-14 23:21:00.889225] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 [2024-05-14 23:21:00.889303] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2883:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:05:37.705 passed 00:05:37.705 Test: test_alloc_io_qpair_fail ...[2024-05-14 23:21:00.890490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 [2024-05-14 23:21:00.890641] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 511:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_add_remove_process ...passed 00:05:37.705 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:05:37.705 Test: test_nvme_ctrlr_set_state ...[2024-05-14 23:21:00.891133] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-05-14 23:21:00.891714] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-05-14 23:21:00.911922] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_ns_mgmt ...[2024-05-14 23:21:00.949667] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_reset ...[2024-05-14 23:21:00.951442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_aer_callback ...[2024-05-14 23:21:00.951990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-05-14 23:21:00.953637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:05:37.705 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:05:37.705 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-05-14 23:21:00.955485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:05:37.705 Test: test_nvme_ctrlr_ana_resize ...[2024-05-14 23:21:00.957117] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:05:37.705 Test: test_nvme_transport_ctrlr_ready ...[2024-05-14 23:21:00.958940] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4029:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:05:37.705 [2024-05-14 23:21:00.959006] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4080:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:05:37.705 passed 00:05:37.705 Test: test_nvme_ctrlr_disable ...[2024-05-14 23:21:00.959065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4148:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:05:37.705 passed 00:05:37.705 00:05:37.705 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.705 suites 1 1 n/a 0 0 00:05:37.705 tests 43 43 43 0 0 00:05:37.705 asserts 10418 10418 10418 0 n/a 00:05:37.705 00:05:37.705 Elapsed time = 0.230 seconds 00:05:37.705 23:21:00 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 
00:05:37.964 00:05:37.964 00:05:37.964 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.964 http://cunit.sourceforge.net/ 00:05:37.964 00:05:37.964 00:05:37.964 Suite: nvme_ctrlr_cmd 00:05:37.964 Test: test_get_log_pages ...passed 00:05:37.964 Test: test_set_feature_cmd ...passed 00:05:37.964 Test: test_set_feature_ns_cmd ...passed 00:05:37.964 Test: test_get_feature_cmd ...passed 00:05:37.964 Test: test_get_feature_ns_cmd ...passed 00:05:37.964 Test: test_abort_cmd ...passed 00:05:37.964 Test: test_set_host_id_cmds ...passed 00:05:37.964 Test: test_io_cmd_raw_no_payload_build ...passed 00:05:37.964 Test: test_io_raw_cmd ...passed 00:05:37.964 Test: test_io_raw_cmd_with_md ...passed 00:05:37.964 Test: test_namespace_attach ...passed 00:05:37.964 Test: test_namespace_detach ...passed 00:05:37.964 Test: test_namespace_create ...passed 00:05:37.964 Test: test_namespace_delete ...passed 00:05:37.964 Test: test_doorbell_buffer_config ...passed 00:05:37.964 Test: test_format_nvme ...passed 00:05:37.964 Test: test_fw_commit ...passed 00:05:37.964 Test: test_fw_image_download ...passed 00:05:37.964 Test: test_sanitize ...passed 00:05:37.964 Test: test_directive ...passed 00:05:37.964 Test: test_nvme_request_add_abort ...passed 00:05:37.964 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:05:37.964 Test: test_nvme_ctrlr_cmd_identify ...passed 00:05:37.964 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:05:37.964 00:05:37.964 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.964 suites 1 1 n/a 0 0 00:05:37.964 tests 24 24 24 0 0 00:05:37.964 asserts 198 198 198 0 n/a 00:05:37.964 00:05:37.964 Elapsed time = 0.000 seconds 00:05:37.964 [2024-05-14 23:21:01.004676] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:05:37.964 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:05:37.964 00:05:37.964 00:05:37.964 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.964 http://cunit.sourceforge.net/ 00:05:37.964 00:05:37.964 00:05:37.964 Suite: nvme_ctrlr_cmd 00:05:37.964 Test: test_geometry_cmd ...passed 00:05:37.964 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:05:37.964 00:05:37.964 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.964 suites 1 1 n/a 0 0 00:05:37.964 tests 2 2 2 0 0 00:05:37.964 asserts 7 7 7 0 n/a 00:05:37.964 00:05:37.964 Elapsed time = 0.000 seconds 00:05:37.964 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:05:37.964 00:05:37.964 00:05:37.964 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.964 http://cunit.sourceforge.net/ 00:05:37.964 00:05:37.964 00:05:37.964 Suite: nvme 00:05:37.964 Test: test_nvme_ns_construct ...passed 00:05:37.964 Test: test_nvme_ns_uuid ...passed 00:05:37.964 Test: test_nvme_ns_csi ...passed 00:05:37.964 Test: test_nvme_ns_data ...passed 00:05:37.964 Test: test_nvme_ns_set_identify_data ...passed 00:05:37.964 Test: test_spdk_nvme_ns_get_values ...passed 00:05:37.964 Test: test_spdk_nvme_ns_is_active ...passed 00:05:37.964 Test: spdk_nvme_ns_supports ...passed 00:05:37.964 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:05:37.964 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:05:37.964 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:05:37.964 Test: 
test_nvme_ns_find_id_desc ...passed 00:05:37.964 00:05:37.964 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.964 suites 1 1 n/a 0 0 00:05:37.964 tests 12 12 12 0 0 00:05:37.964 asserts 83 83 83 0 n/a 00:05:37.964 00:05:37.964 Elapsed time = 0.000 seconds 00:05:37.964 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:05:37.964 00:05:37.964 00:05:37.964 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.964 http://cunit.sourceforge.net/ 00:05:37.964 00:05:37.964 00:05:37.964 Suite: nvme_ns_cmd 00:05:37.964 Test: split_test ...passed 00:05:37.964 Test: split_test2 ...passed 00:05:37.964 Test: split_test3 ...passed 00:05:37.964 Test: split_test4 ...passed 00:05:37.964 Test: test_nvme_ns_cmd_flush ...passed 00:05:37.964 Test: test_nvme_ns_cmd_dataset_management ...passed 00:05:37.964 Test: test_nvme_ns_cmd_copy ...passed 00:05:37.964 Test: test_io_flags ...passed 00:05:37.964 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:05:37.964 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:05:37.964 Test: test_nvme_ns_cmd_reservation_register ...passed 00:05:37.964 Test: test_nvme_ns_cmd_reservation_release ...passed 00:05:37.964 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:05:37.964 Test: test_nvme_ns_cmd_reservation_report ...passed 00:05:37.964 Test: test_cmd_child_request ...passed 00:05:37.964 Test: test_nvme_ns_cmd_readv ...passed 00:05:37.964 Test: test_nvme_ns_cmd_read_with_md ...[2024-05-14 23:21:01.081526] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:05:37.964 passed 00:05:37.964 Test: test_nvme_ns_cmd_writev ...passed 00:05:37.964 Test: test_nvme_ns_cmd_write_with_md ...[2024-05-14 23:21:01.082385] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:05:37.964 passed 00:05:37.964 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:05:37.964 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:05:37.964 Test: test_nvme_ns_cmd_comparev ...passed 00:05:37.964 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:05:37.964 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:05:37.965 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:05:37.965 Test: test_nvme_ns_cmd_setup_request ...passed 00:05:37.965 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:05:37.965 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:05:37.965 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:05:37.965 Test: test_nvme_ns_cmd_verify ...passed 00:05:37.965 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:05:37.965 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:05:37.965 00:05:37.965 [2024-05-14 23:21:01.083656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:05:37.965 [2024-05-14 23:21:01.083749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:05:37.965 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.965 suites 1 1 n/a 0 0 00:05:37.965 tests 32 32 32 0 0 00:05:37.965 asserts 550 550 550 0 n/a 00:05:37.965 00:05:37.965 Elapsed time = 0.000 seconds 00:05:37.965 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:05:37.965 00:05:37.965 00:05:37.965 CUnit - A unit 
testing framework for C - Version 2.1-3 00:05:37.965 http://cunit.sourceforge.net/ 00:05:37.965 00:05:37.965 00:05:37.965 Suite: nvme_ns_cmd 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:05:37.965 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:05:37.965 00:05:37.965 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.965 suites 1 1 n/a 0 0 00:05:37.965 tests 12 12 12 0 0 00:05:37.965 asserts 123 123 123 0 n/a 00:05:37.965 00:05:37.965 Elapsed time = 0.000 seconds 00:05:37.965 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:05:37.965 00:05:37.965 00:05:37.965 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.965 http://cunit.sourceforge.net/ 00:05:37.965 00:05:37.965 00:05:37.965 Suite: nvme_qpair 00:05:37.965 Test: test3 ...passed 00:05:37.965 Test: test_ctrlr_failed ...passed 00:05:37.965 Test: struct_packing ...passed 00:05:37.965 Test: test_nvme_qpair_process_completions ...passed 00:05:37.965 Test: test_nvme_completion_is_retry ...passed 00:05:37.965 Test: test_get_status_string ...passed 00:05:37.965 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:05:37.965 Test: test_nvme_qpair_submit_request ...passed 00:05:37.965 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:05:37.965 Test: test_nvme_qpair_manual_complete_request ...passed 00:05:37.965 Test: test_nvme_qpair_init_deinit ...passed 00:05:37.965 Test: test_nvme_get_sgl_print_info ...passed 00:05:37.965 00:05:37.965 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.965 suites 1 1 n/a 0 0 00:05:37.965 tests 12 12 12 0 0 00:05:37.965 asserts 154 154 154 0 n/a 00:05:37.965 00:05:37.965 Elapsed time = 0.000 seconds 00:05:37.965 [2024-05-14 23:21:01.139881] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:37.965 [2024-05-14 23:21:01.140193] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:37.965 [2024-05-14 23:21:01.140261] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:05:37.965 [2024-05-14 23:21:01.140347] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:05:37.965 [2024-05-14 23:21:01.140673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:05:37.965 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@94 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:05:37.965 00:05:37.965 00:05:37.965 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.965 http://cunit.sourceforge.net/ 00:05:37.965 00:05:37.965 00:05:37.965 Suite: nvme_pcie 00:05:37.965 Test: test_prp_list_append ...[2024-05-14 23:21:01.162519] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:05:37.965 [2024-05-14 23:21:01.162811] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:05:37.965 [2024-05-14 23:21:01.162848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:05:37.965 passed 00:05:37.965 Test: test_nvme_pcie_hotplug_monitor ...passed 00:05:37.965 Test: test_shadow_doorbell_update ...passed 00:05:37.965 Test: test_build_contig_hw_sgl_request ...passed 00:05:37.965 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:05:37.965 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:05:37.965 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:05:37.965 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:05:37.965 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:05:37.965 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:05:37.965 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:05:37.965 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:05:37.965 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:05:37.965 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:05:37.965 00:05:37.965 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.965 suites 1 1 n/a 0 0 00:05:37.965 tests 14 14 14 0 0 00:05:37.965 asserts 235 235 235 0 n/a 00:05:37.965 00:05:37.965 Elapsed time = 0.000 seconds 00:05:37.965 [2024-05-14 23:21:01.163079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:05:37.965 [2024-05-14 23:21:01.163167] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:05:37.965 [2024-05-14 23:21:01.163366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:05:37.965 [2024-05-14 23:21:01.163450] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:05:37.965 [2024-05-14 23:21:01.163530] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:05:37.965 [2024-05-14 23:21:01.163587] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:05:37.965 [2024-05-14 23:21:01.163628] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:05:37.965 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:05:37.965 00:05:37.965 00:05:37.965 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.965 http://cunit.sourceforge.net/ 00:05:37.965 00:05:37.965 00:05:37.965 Suite: nvme_ns_cmd 00:05:37.965 Test: nvme_poll_group_create_test ...passed 00:05:37.965 Test: nvme_poll_group_add_remove_test ...passed 00:05:37.965 Test: nvme_poll_group_process_completions ...passed 00:05:37.965 Test: nvme_poll_group_destroy_test ...passed 00:05:37.965 Test: nvme_poll_group_get_free_stats ...passed 00:05:37.965 00:05:37.965 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.965 suites 1 1 n/a 0 0 00:05:37.965 tests 5 5 5 0 0 00:05:37.965 asserts 75 75 75 0 n/a 00:05:37.965 00:05:37.965 Elapsed time = 0.000 seconds 00:05:37.965 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:05:37.965 00:05:37.965 00:05:37.965 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.965 http://cunit.sourceforge.net/ 00:05:37.965 00:05:37.965 00:05:37.965 Suite: nvme_quirks 00:05:37.965 Test: test_nvme_quirks_striping ...passed 00:05:37.965 00:05:37.965 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.965 suites 1 1 n/a 0 0 00:05:37.965 tests 1 1 1 0 0 00:05:37.965 asserts 5 5 5 0 n/a 00:05:37.965 00:05:37.965 Elapsed time = 0.000 seconds 00:05:37.965 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:05:37.965 00:05:37.965 00:05:37.965 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.965 http://cunit.sourceforge.net/ 00:05:37.965 00:05:37.965 00:05:37.965 Suite: nvme_tcp 00:05:37.965 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:05:37.965 Test: test_nvme_tcp_build_iovs ...passed 00:05:37.965 Test: test_nvme_tcp_build_sgl_request ...passed 00:05:37.965 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:05:37.965 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:05:37.965 Test: test_nvme_tcp_req_complete_safe ...passed 00:05:37.965 Test: test_nvme_tcp_req_get ...passed 00:05:37.965 Test: test_nvme_tcp_req_init ...passed 00:05:37.965 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:05:37.965 Test: test_nvme_tcp_qpair_write_pdu ...[2024-05-14 23:21:01.249258] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 825:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffcfb9b6830, and the iovcnt=16, remaining_size=28672 00:05:37.965 passed 00:05:37.965 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:05:37.965 Test: test_nvme_tcp_alloc_reqs ...passed 00:05:37.966 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-05-14 23:21:01.249832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b8540 is same with the state(6) to be set 00:05:37.966 
[2024-05-14 23:21:01.250171] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7700 is same with the state(5) to be set 00:05:37.966 passed 00:05:37.966 Test: test_nvme_tcp_pdu_ch_handle ...[2024-05-14 23:21:01.250260] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffcfb9b8290 00:05:37.966 [2024-05-14 23:21:01.250339] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1226:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:05:37.966 [2024-05-14 23:21:01.250444] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7bc0 is same with the state(5) to be set 00:05:37.966 [2024-05-14 23:21:01.250525] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1177:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:05:37.966 [2024-05-14 23:21:01.250653] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7bc0 is same with the state(5) to be set 00:05:37.966 [2024-05-14 23:21:01.250710] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:05:37.966 [2024-05-14 23:21:01.250750] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7bc0 is same with the state(5) to be set 00:05:37.966 [2024-05-14 23:21:01.250816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7bc0 is same with the state(5) to be set 00:05:38.225 passed 00:05:38.225 Test: test_nvme_tcp_qpair_connect_sock ...[2024-05-14 23:21:01.250866] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7bc0 is same with the state(5) to be set 00:05:38.225 [2024-05-14 23:21:01.250955] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7bc0 is same with the state(5) to be set 00:05:38.225 [2024-05-14 23:21:01.250999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7bc0 is same with the state(5) to be set 00:05:38.225 [2024-05-14 23:21:01.251059] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7bc0 is same with the state(5) to be set 00:05:38.225 [2024-05-14 23:21:01.251229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:05:38.225 [2024-05-14 23:21:01.251287] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:05:38.225 passed 00:05:38.225 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:05:38.225 Test: test_nvme_tcp_c2h_payload_handle ...[2024-05-14 23:21:01.251551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:05:38.225 passed 00:05:38.225 Test: test_nvme_tcp_icresp_handle ...[2024-05-14 23:21:01.251708] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1341:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcfb9b7dd0): PDU 
Sequence Error 00:05:38.225 [2024-05-14 23:21:01.251783] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1567:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:05:38.225 [2024-05-14 23:21:01.251837] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1574:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:05:38.225 [2024-05-14 23:21:01.251885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7700 is same with the state(5) to be set 00:05:38.225 passed 00:05:38.225 Test: test_nvme_tcp_pdu_payload_handle ...[2024-05-14 23:21:01.251942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:05:38.225 [2024-05-14 23:21:01.251992] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7700 is same with the state(5) to be set 00:05:38.225 [2024-05-14 23:21:01.252077] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b7700 is same with the state(0) to be set 00:05:38.225 passed 00:05:38.225 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:05:38.225 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-05-14 23:21:01.252243] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1341:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcfb9b8290): PDU Sequence Error 00:05:38.225 [2024-05-14 23:21:01.252346] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1644:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffcfb9b69d0 00:05:38.225 passed 00:05:38.225 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-05-14 23:21:01.252546] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 354:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffcfb9b6050, errno=0, rc=0 00:05:38.225 [2024-05-14 23:21:01.252613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b6050 is same with the state(5) to be set 00:05:38.225 [2024-05-14 23:21:01.252691] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfb9b6050 is same with the state(5) to be set 00:05:38.225 passed 00:05:38.225 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-05-14 23:21:01.252769] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcfb9b6050 (0): Success 00:05:38.225 [2024-05-14 23:21:01.252842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcfb9b6050 (0): Success 00:05:38.225 passed 00:05:38.225 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:05:38.225 Test: test_nvme_tcp_poll_group_get_stats ...[2024-05-14 23:21:01.315729] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:05:38.225 [2024-05-14 23:21:01.315865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:05:38.225 [2024-05-14 23:21:01.316095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:38.225 passed 00:05:38.225 Test: test_nvme_tcp_ctrlr_construct ...passed 00:05:38.225 Test: test_nvme_tcp_qpair_submit_request ...[2024-05-14 23:21:01.316510] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:38.225 [2024-05-14 23:21:01.316778] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:05:38.225 [2024-05-14 23:21:01.316832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:38.225 [2024-05-14 23:21:01.316943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:05:38.225 [2024-05-14 23:21:01.317009] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:38.225 [2024-05-14 23:21:01.317135] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000000f80 with addr=192.168.1.78, port=23 00:05:38.225 [2024-05-14 23:21:01.317208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:05:38.225 passed 00:05:38.225 00:05:38.225 [2024-05-14 23:21:01.317343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 825:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000000c80, and the iovcnt=1, remaining_size=1024 00:05:38.225 [2024-05-14 23:21:01.317397] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1018:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:05:38.225 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.225 suites 1 1 n/a 0 0 00:05:38.225 tests 27 27 27 0 0 00:05:38.225 asserts 624 624 624 0 n/a 00:05:38.225 00:05:38.225 Elapsed time = 0.070 seconds 00:05:38.225 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:05:38.225 00:05:38.225 00:05:38.225 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.225 http://cunit.sourceforge.net/ 00:05:38.225 00:05:38.225 00:05:38.225 Suite: nvme_transport 00:05:38.225 Test: test_nvme_get_transport ...passed 00:05:38.225 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:05:38.225 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:05:38.225 Test: test_nvme_transport_poll_group_add_remove ...passed 00:05:38.225 Test: test_ctrlr_get_memory_domains ...passed 00:05:38.225 00:05:38.225 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.225 suites 1 1 n/a 0 0 00:05:38.225 tests 5 5 5 0 0 00:05:38.226 asserts 28 28 28 0 n/a 00:05:38.226 00:05:38.226 Elapsed time = 0.000 seconds 00:05:38.226 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:05:38.226 00:05:38.226 00:05:38.226 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.226 http://cunit.sourceforge.net/ 00:05:38.226 00:05:38.226 00:05:38.226 Suite: nvme_io_msg 00:05:38.226 Test: test_nvme_io_msg_send ...passed 00:05:38.226 Test: test_nvme_io_msg_process ...passed 00:05:38.226 Test: 
test_nvme_io_msg_ctrlr_register_unregister ...passed 00:05:38.226 00:05:38.226 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.226 suites 1 1 n/a 0 0 00:05:38.226 tests 3 3 3 0 0 00:05:38.226 asserts 56 56 56 0 n/a 00:05:38.226 00:05:38.226 Elapsed time = 0.000 seconds 00:05:38.226 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:05:38.226 00:05:38.226 00:05:38.226 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.226 http://cunit.sourceforge.net/ 00:05:38.226 00:05:38.226 00:05:38.226 Suite: nvme_pcie_common 00:05:38.226 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:05:38.226 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:05:38.226 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...[2024-05-14 23:21:01.396785] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:05:38.226 passed 00:05:38.226 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-05-14 23:21:01.397356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:05:38.226 passed 00:05:38.226 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:05:38.226 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:05:38.226 00:05:38.226 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.226 suites 1 1 n/a 0 0 00:05:38.226 tests 6 6 6 0 0 00:05:38.226 asserts 148 148 148 0 n/a 00:05:38.226 00:05:38.226 Elapsed time = 0.000 seconds 00:05:38.226 [2024-05-14 23:21:01.397528] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 
00:05:38.226 [2024-05-14 23:21:01.397583] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:05:38.226 [2024-05-14 23:21:01.397893] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:38.226 [2024-05-14 23:21:01.397933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:38.226 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:05:38.226 00:05:38.226 00:05:38.226 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.226 http://cunit.sourceforge.net/ 00:05:38.226 00:05:38.226 00:05:38.226 Suite: nvme_fabric 00:05:38.226 Test: test_nvme_fabric_prop_set_cmd ...passed 00:05:38.226 Test: test_nvme_fabric_prop_get_cmd ...passed 00:05:38.226 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:05:38.226 Test: test_nvme_fabric_discover_probe ...passed 00:05:38.226 Test: test_nvme_fabric_qpair_connect ...passed 00:05:38.226 00:05:38.226 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.226 suites 1 1 n/a 0 0 00:05:38.226 tests 5 5 5 0 0 00:05:38.226 asserts 60 60 60 0 n/a 00:05:38.226 00:05:38.226 Elapsed time = 0.010 seconds 00:05:38.226 [2024-05-14 23:21:01.428064] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:05:38.226 23:21:01 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:05:38.226 00:05:38.226 00:05:38.226 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.226 http://cunit.sourceforge.net/ 00:05:38.226 00:05:38.226 00:05:38.226 Suite: nvme_opal 00:05:38.226 Test: test_opal_nvme_security_recv_send_done ...passed 00:05:38.226 Test: test_opal_add_short_atom_header ...passed 00:05:38.226 00:05:38.226 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.226 suites 1 1 n/a 0 0 00:05:38.226 tests 2 2 2 0 0 00:05:38.226 asserts 22 22 22 0 n/a 00:05:38.226 00:05:38.226 Elapsed time = 0.000 seconds 00:05:38.226 [2024-05-14 23:21:01.454469] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 
00:05:38.226 ************************************ 00:05:38.226 END TEST unittest_nvme 00:05:38.226 ************************************ 00:05:38.226 00:05:38.226 real 0m0.947s 00:05:38.226 user 0m0.390s 00:05:38.226 sys 0m0.412s 00:05:38.226 23:21:01 unittest.unittest_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.226 23:21:01 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:38.226 23:21:01 unittest -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:05:38.226 23:21:01 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.226 23:21:01 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.226 23:21:01 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:38.486 ************************************ 00:05:38.486 START TEST unittest_log 00:05:38.486 ************************************ 00:05:38.486 23:21:01 unittest.unittest_log -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:05:38.486 00:05:38.486 00:05:38.486 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.486 http://cunit.sourceforge.net/ 00:05:38.486 00:05:38.486 00:05:38.486 Suite: log 00:05:38.486 Test: log_test ...passed 00:05:38.486 Test: deprecation ...[2024-05-14 23:21:01.536980] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:05:38.486 [2024-05-14 23:21:01.537190] log_ut.c: 57:log_test: *DEBUG*: log test 00:05:38.486 log dump test: 00:05:38.486 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:05:38.486 spdk dump test: 00:05:38.486 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:05:38.486 spdk dump test: 00:05:38.486 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:05:38.486 00000010 65 20 63 68 61 72 73 e chars 00:05:39.423 passed 00:05:39.423 00:05:39.423 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.423 suites 1 1 n/a 0 0 00:05:39.423 tests 2 2 2 0 0 00:05:39.423 asserts 73 73 73 0 n/a 00:05:39.423 00:05:39.423 Elapsed time = 0.000 seconds 00:05:39.423 00:05:39.423 real 0m1.029s 00:05:39.423 user 0m0.014s 00:05:39.423 sys 0m0.016s 00:05:39.423 23:21:02 unittest.unittest_log -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.423 23:21:02 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:05:39.423 ************************************ 00:05:39.423 END TEST unittest_log 00:05:39.423 ************************************ 00:05:39.423 23:21:02 unittest -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:05:39.423 23:21:02 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.423 23:21:02 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.423 23:21:02 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:39.423 ************************************ 00:05:39.423 START TEST unittest_lvol 00:05:39.423 ************************************ 00:05:39.423 23:21:02 unittest.unittest_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:05:39.423 00:05:39.423 00:05:39.423 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.423 http://cunit.sourceforge.net/ 00:05:39.423 00:05:39.423 00:05:39.423 Suite: lvol 00:05:39.423 Test: lvs_init_unload_success ...passed 00:05:39.423 Test: lvs_init_destroy_success ...[2024-05-14 23:21:02.617374] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 
892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:05:39.423 [2024-05-14 23:21:02.617811] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:05:39.423 passed 00:05:39.423 Test: lvs_init_opts_success ...passed 00:05:39.423 Test: lvs_unload_lvs_is_null_fail ...[2024-05-14 23:21:02.617934] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:05:39.423 passed 00:05:39.423 Test: lvs_names ...[2024-05-14 23:21:02.617981] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:05:39.423 [2024-05-14 23:21:02.618022] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:05:39.423 [2024-05-14 23:21:02.618143] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:05:39.423 passed 00:05:39.423 Test: lvol_create_destroy_success ...passed 00:05:39.423 Test: lvol_create_fail ...passed 00:05:39.423 Test: lvol_destroy_fail ...passed 00:05:39.423 Test: lvol_close ...passed 00:05:39.423 Test: lvol_resize ...passed 00:05:39.423 Test: lvol_set_read_only ...[2024-05-14 23:21:02.618495] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:05:39.423 [2024-05-14 23:21:02.618631] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:05:39.423 [2024-05-14 23:21:02.618944] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:05:39.423 [2024-05-14 23:21:02.619280] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:05:39.423 [2024-05-14 23:21:02.619399] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:05:39.423 passed 00:05:39.423 Test: test_lvs_load ...passed 00:05:39.423 Test: lvols_load ...passed 00:05:39.423 Test: lvol_open ...passed 00:05:39.423 Test: lvol_snapshot ...passed 00:05:39.423 Test: lvol_snapshot_fail ...passed 00:05:39.423 Test: lvol_clone ...[2024-05-14 23:21:02.620593] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:05:39.423 [2024-05-14 23:21:02.620655] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:05:39.423 [2024-05-14 23:21:02.621015] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:05:39.423 [2024-05-14 23:21:02.621229] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:05:39.423 [2024-05-14 23:21:02.621836] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:05:39.423 passed 00:05:39.423 Test: lvol_clone_fail ...passed 00:05:39.423 Test: lvol_iter_clones ...passed 00:05:39.423 Test: lvol_refcnt ...passed 00:05:39.423 Test: lvol_names ...passed 00:05:39.423 Test: lvol_create_thin_provisioned ...[2024-05-14 23:21:02.622561] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:05:39.423 [2024-05-14 23:21:02.623024] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 403935ed-7092-4f95-a428-febab456cd27 because it is still open 00:05:39.423 
[2024-05-14 23:21:02.623264] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:05:39.423 [2024-05-14 23:21:02.623394] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:39.423 [2024-05-14 23:21:02.623625] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:05:39.423 passed 00:05:39.423 Test: lvol_rename ...passed 00:05:39.423 Test: lvs_rename ...passed 00:05:39.423 Test: lvol_inflate ...passed 00:05:39.423 Test: lvol_decouple_parent ...passed 00:05:39.423 Test: lvol_get_xattr ...passed 00:05:39.423 Test: lvol_esnap_reload ...passed 00:05:39.423 Test: lvol_esnap_create_bad_args ...passed 00:05:39.423 Test: lvol_esnap_create_delete ...passed 00:05:39.423 Test: lvol_esnap_load_esnaps ...passed 00:05:39.423 Test: lvol_esnap_missing ...passed 00:05:39.423 Test: lvol_esnap_hotplug ... 00:05:39.423 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:05:39.423 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:05:39.423 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:05:39.423 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:05:39.423 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:05:39.423 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:05:39.423 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:05:39.423 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:05:39.423 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:05:39.423 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:05:39.423 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:05:39.423 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:05:39.423 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:05:39.423 passed 00:05:39.423 Test: lvol_get_by ...[2024-05-14 23:21:02.624092] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:39.423 [2024-05-14 23:21:02.624232] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:05:39.423 [2024-05-14 23:21:02.624448] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:05:39.423 [2024-05-14 23:21:02.624656] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:05:39.423 [2024-05-14 23:21:02.624903] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:05:39.423 [2024-05-14 23:21:02.625303] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:05:39.423 [2024-05-14 23:21:02.625349] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:05:39.423 [2024-05-14 23:21:02.625400] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:05:39.423 [2024-05-14 23:21:02.625565] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:05:39.423 [2024-05-14 23:21:02.625689] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:05:39.423 [2024-05-14 23:21:02.626040] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:05:39.423 [2024-05-14 23:21:02.626296] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:05:39.423 [2024-05-14 23:21:02.626369] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:05:39.423 [2024-05-14 23:21:02.627136] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 2aaa155e-51fa-4dbc-9211-5ef4e0ba2ec7: failed to create esnap bs_dev: error -12 00:05:39.423 [2024-05-14 23:21:02.627518] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 097e5977-af58-4d5f-9afd-e9ebf3e34eb3: failed to create esnap bs_dev: error -12 00:05:39.423 [2024-05-14 23:21:02.627729] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol f4aa6201-f322-449f-8f70-7fa1fa943321: failed to create esnap bs_dev: error -12 00:05:39.423 passed 00:05:39.423 Test: lvol_shallow_copy ...passed 00:05:39.423 00:05:39.423 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.423 suites 1 1 n/a 0 0 00:05:39.423 tests 35 35 35 0 0 00:05:39.423 asserts 1459 1459 1459 0 n/a 00:05:39.423 00:05:39.423 Elapsed time = 0.010 seconds 00:05:39.423 [2024-05-14 23:21:02.629566] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:05:39.423 [2024-05-14 23:21:02.629626] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 310fe702-527e-48c2-a50f-6c8e047eb617 shallow copy, ext_dev must not be NULL 00:05:39.423 ************************************ 00:05:39.423 END TEST unittest_lvol 00:05:39.423 ************************************ 00:05:39.424 00:05:39.424 real 0m0.046s 00:05:39.424 user 0m0.026s 00:05:39.424 sys 0m0.020s 00:05:39.424 23:21:02 unittest.unittest_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.424 23:21:02 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:39.424 23:21:02 unittest -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:39.424 23:21:02 unittest -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:05:39.424 23:21:02 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.424 23:21:02 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.424 23:21:02 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:39.424 ************************************ 00:05:39.424 START TEST unittest_nvme_rdma 00:05:39.424 ************************************ 00:05:39.424 23:21:02 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:05:39.683 00:05:39.683 00:05:39.683 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.683 http://cunit.sourceforge.net/ 00:05:39.683 00:05:39.683 00:05:39.683 Suite: nvme_rdma 00:05:39.683 Test: test_nvme_rdma_build_sgl_request ...passed 00:05:39.683 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:05:39.683 Test: test_nvme_rdma_build_contig_request ...[2024-05-14 23:21:02.710649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:05:39.683 [2024-05-14 23:21:02.710937] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1632:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:05:39.683 [2024-05-14 23:21:02.711044] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1688:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:05:39.683 passed 00:05:39.683 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:05:39.683 Test: test_nvme_rdma_create_reqs ...passed 00:05:39.683 Test: test_nvme_rdma_create_rsps ...passed 00:05:39.683 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:05:39.683 Test: test_nvme_rdma_poller_create ...passed 00:05:39.683 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-05-14 23:21:02.711405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1569:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:05:39.683 [2024-05-14 23:21:02.711537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:05:39.683 [2024-05-14 23:21:02.711850] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:05:39.683 [2024-05-14 23:21:02.712036] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:05:39.683 [2024-05-14 23:21:02.712098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:05:39.683 passed 00:05:39.683 Test: test_nvme_rdma_ctrlr_construct ...[2024-05-14 23:21:02.712354] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:05:39.683 passed 00:05:39.683 Test: test_nvme_rdma_req_put_and_get ...passed 00:05:39.683 Test: test_nvme_rdma_req_init ...passed 00:05:39.683 Test: test_nvme_rdma_validate_cm_event ...passed 00:05:39.683 Test: test_nvme_rdma_qpair_init ...passed 00:05:39.683 Test: test_nvme_rdma_qpair_submit_request ...passed 00:05:39.683 Test: test_nvme_rdma_memory_domain ...passed 00:05:39.683 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:05:39.683 Test: test_rdma_get_memory_translation ...passed 00:05:39.683 Test: test_get_rdma_qpair_from_wc ...passed 00:05:39.683 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:05:39.683 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:05:39.683 Test: test_nvme_rdma_qpair_set_poller ...passed 00:05:39.683 00:05:39.683 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.683 suites 1 1 n/a 0 0 00:05:39.683 tests 22 22 22 0 0 00:05:39.683 asserts 412 412 412 0 n/a 00:05:39.683 00:05:39.683 Elapsed time = 0.010 seconds 00:05:39.683 [2024-05-14 23:21:02.712808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:05:39.683 [2024-05-14 23:21:02.712848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:05:39.683 [2024-05-14 23:21:02.712934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:05:39.683 [2024-05-14 23:21:02.712984] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:05:39.683 [2024-05-14 23:21:02.713064] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:05:39.683 [2024-05-14 23:21:02.713201] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:39.683 [2024-05-14 23:21:02.713243] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:05:39.683 [2024-05-14 23:21:02.713430] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:05:39.683 [2024-05-14 23:21:02.713482] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:05:39.683 [2024-05-14 23:21:02.713521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffec893a3a0 on poll group 0x60c000000040 00:05:39.683 [2024-05-14 23:21:02.713598] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:05:39.683 [2024-05-14 23:21:02.713635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:05:39.683 [2024-05-14 23:21:02.713660] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffec893a3a0 on poll group 0x60c000000040 00:05:39.683 [2024-05-14 23:21:02.713719] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:05:39.683 ************************************ 00:05:39.683 END TEST unittest_nvme_rdma 00:05:39.683 ************************************ 00:05:39.683 00:05:39.683 real 0m0.033s 00:05:39.683 user 0m0.023s 00:05:39.683 sys 0m0.010s 00:05:39.683 23:21:02 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.683 23:21:02 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:39.683 23:21:02 unittest -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:05:39.683 23:21:02 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.683 23:21:02 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.683 23:21:02 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:39.683 ************************************ 00:05:39.683 START TEST unittest_nvmf_transport 00:05:39.683 ************************************ 00:05:39.683 23:21:02 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:05:39.683 00:05:39.683 00:05:39.683 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.683 http://cunit.sourceforge.net/ 00:05:39.683 00:05:39.683 00:05:39.683 Suite: nvmf 00:05:39.683 Test: test_spdk_nvmf_transport_create ...[2024-05-14 23:21:02.789518] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:05:39.683 [2024-05-14 23:21:02.789763] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:05:39.683 [2024-05-14 23:21:02.789800] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:05:39.683 [2024-05-14 23:21:02.789895] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:05:39.683 passed 00:05:39.683 Test: test_nvmf_transport_poll_group_create ...passed 00:05:39.683 Test: test_spdk_nvmf_transport_opts_init ...[2024-05-14 23:21:02.790023] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:05:39.684 [2024-05-14 23:21:02.790116] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:05:39.684 [2024-05-14 23:21:02.790142] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:05:39.684 passed 00:05:39.684 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:05:39.684 00:05:39.684 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.684 suites 1 1 n/a 0 0 00:05:39.684 tests 4 4 4 0 0 00:05:39.684 asserts 49 49 49 0 n/a 00:05:39.684 00:05:39.684 Elapsed time = 0.000 seconds 00:05:39.684 00:05:39.684 real 0m0.029s 00:05:39.684 user 0m0.014s 00:05:39.684 sys 0m0.016s 00:05:39.684 ************************************ 00:05:39.684 END TEST unittest_nvmf_transport 00:05:39.684 ************************************ 00:05:39.684 23:21:02 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.684 23:21:02 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:05:39.684 23:21:02 unittest -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:05:39.684 23:21:02 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.684 23:21:02 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.684 23:21:02 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:39.684 ************************************ 00:05:39.684 START TEST unittest_rdma 00:05:39.684 ************************************ 00:05:39.684 23:21:02 unittest.unittest_rdma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:05:39.684 00:05:39.684 00:05:39.684 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.684 http://cunit.sourceforge.net/ 00:05:39.684 00:05:39.684 00:05:39.684 Suite: rdma_common 00:05:39.684 Test: test_spdk_rdma_pd ...[2024-05-14 23:21:02.863962] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:05:39.684 [2024-05-14 23:21:02.864221] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:05:39.684 passed 00:05:39.684 00:05:39.684 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.684 suites 1 1 n/a 0 0 00:05:39.684 tests 1 1 1 0 0 00:05:39.684 asserts 31 31 31 0 n/a 00:05:39.684 00:05:39.684 Elapsed time = 0.000 seconds 00:05:39.684 00:05:39.684 real 0m0.029s 00:05:39.684 user 0m0.013s 00:05:39.684 sys 0m0.016s 00:05:39.684 ************************************ 00:05:39.684 END TEST unittest_rdma 00:05:39.684 ************************************ 00:05:39.684 23:21:02 unittest.unittest_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.684 23:21:02 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:39.684 23:21:02 unittest -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:39.684 23:21:02 unittest -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:05:39.684 23:21:02 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.684 23:21:02 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.684 23:21:02 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:39.684 
************************************ 00:05:39.684 START TEST unittest_nvme_cuse 00:05:39.684 ************************************ 00:05:39.684 23:21:02 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:05:39.684 00:05:39.684 00:05:39.684 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.684 http://cunit.sourceforge.net/ 00:05:39.684 00:05:39.684 00:05:39.684 Suite: nvme_cuse 00:05:39.684 Test: test_cuse_nvme_submit_io_read_write ...passed 00:05:39.684 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:05:39.684 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:05:39.684 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:05:39.684 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:05:39.684 Test: test_cuse_nvme_submit_io ...passed 00:05:39.684 Test: test_cuse_nvme_reset ...passed 00:05:39.684 Test: test_nvme_cuse_stop ...[2024-05-14 23:21:02.940443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:05:39.684 [2024-05-14 23:21:02.940686] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:05:40.319 passed 00:05:40.319 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:05:40.319 00:05:40.319 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.319 suites 1 1 n/a 0 0 00:05:40.319 tests 9 9 9 0 0 00:05:40.319 asserts 118 118 118 0 n/a 00:05:40.319 00:05:40.319 Elapsed time = 0.500 seconds 00:05:40.319 00:05:40.319 real 0m0.529s 00:05:40.319 user 0m0.246s 00:05:40.319 sys 0m0.284s 00:05:40.319 ************************************ 00:05:40.319 END TEST unittest_nvme_cuse 00:05:40.319 ************************************ 00:05:40.319 23:21:03 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.319 23:21:03 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:05:40.319 23:21:03 unittest -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:05:40.319 23:21:03 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.319 23:21:03 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.319 23:21:03 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:40.319 ************************************ 00:05:40.319 START TEST unittest_nvmf 00:05:40.319 ************************************ 00:05:40.319 23:21:03 unittest.unittest_nvmf -- common/autotest_common.sh@1121 -- # unittest_nvmf 00:05:40.319 23:21:03 unittest.unittest_nvmf -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:05:40.319 00:05:40.319 00:05:40.319 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.319 http://cunit.sourceforge.net/ 00:05:40.319 00:05:40.319 00:05:40.319 Suite: nvmf 00:05:40.319 Test: test_get_log_page ...passed 00:05:40.319 Test: test_process_fabrics_cmd ...[2024-05-14 23:21:03.517746] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:05:40.319 [2024-05-14 23:21:03.517971] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:05:40.319 passed 00:05:40.319 Test: test_connect ...[2024-05-14 23:21:03.518346] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1006:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:05:40.319 
[2024-05-14 23:21:03.518431] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 869:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:05:40.319 [2024-05-14 23:21:03.518459] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1045:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:05:40.319 [2024-05-14 23:21:03.518486] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:05:40.319 [2024-05-14 23:21:03.518575] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 880:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:05:40.319 [2024-05-14 23:21:03.518617] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 887:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:05:40.319 [2024-05-14 23:21:03.518645] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:05:40.319 [2024-05-14 23:21:03.518672] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 920:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:05:40.319 [2024-05-14 23:21:03.518722] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:05:40.319 [2024-05-14 23:21:03.518766] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 670:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:05:40.319 [2024-05-14 23:21:03.518850] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:05:40.319 [2024-05-14 23:21:03.518906] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:05:40.319 [2024-05-14 23:21:03.518948] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:05:40.319 [2024-05-14 23:21:03.518989] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 713:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:05:40.319 [2024-05-14 23:21:03.519041] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 293:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 00:05:40.319 [2024-05-14 23:21:03.519107] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:05:40.319 [2024-05-14 23:21:03.519157] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:05:40.319 passed 00:05:40.319 Test: test_get_ns_id_desc_list ...passed 00:05:40.319 Test: test_identify_ns ...[2024-05-14 23:21:03.519328] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:40.319 [2024-05-14 23:21:03.519485] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:05:40.319 [2024-05-14 23:21:03.519544] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:05:40.319 passed 00:05:40.319 Test: test_identify_ns_iocs_specific ...[2024-05-14 23:21:03.519629] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:40.319 [2024-05-14 
23:21:03.519768] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:05:40.319 passed 00:05:40.319 Test: test_reservation_write_exclusive ...passed 00:05:40.319 Test: test_reservation_exclusive_access ...passed 00:05:40.319 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:05:40.319 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:05:40.319 Test: test_reservation_notification_log_page ...passed 00:05:40.319 Test: test_get_dif_ctx ...passed 00:05:40.319 Test: test_set_get_features ...passed 00:05:40.319 Test: test_identify_ctrlr ...passed 00:05:40.319 Test: test_identify_ctrlr_iocs_specific ...[2024-05-14 23:21:03.520208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:05:40.319 [2024-05-14 23:21:03.520252] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:05:40.319 [2024-05-14 23:21:03.520278] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1653:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:05:40.319 [2024-05-14 23:21:03.520304] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1729:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:05:40.319 passed 00:05:40.319 Test: test_custom_admin_cmd ...passed 00:05:40.319 Test: test_fused_compare_and_write ...[2024-05-14 23:21:03.520559] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4212:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:05:40.319 [2024-05-14 23:21:03.520583] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4201:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:05:40.319 [2024-05-14 23:21:03.520615] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4219:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:05:40.319 passed 00:05:40.319 Test: test_multi_async_event_reqs ...passed 00:05:40.319 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:05:40.319 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:05:40.319 Test: test_multi_async_events ...passed 00:05:40.319 Test: test_rae ...passed 00:05:40.319 Test: test_nvmf_ctrlr_create_destruct ...passed 00:05:40.319 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:05:40.319 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:05:40.319 Test: test_zcopy_read ...[2024-05-14 23:21:03.520855] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:05:40.320 [2024-05-14 23:21:03.520891] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:05:40.320 passed 00:05:40.320 Test: test_zcopy_write ...passed 00:05:40.320 Test: test_nvmf_property_set ...passed 00:05:40.320 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:05:40.320 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-05-14 23:21:03.520990] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:05:40.320 [2024-05-14 23:21:03.521022] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:05:40.320 [2024-05-14 23:21:03.521064] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1963:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:05:40.320 [2024-05-14 23:21:03.521084] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:05:40.320 passed 00:05:40.320 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:05:40.320 Test: test_nvmf_check_qpair_active ...[2024-05-14 23:21:03.521126] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1981:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:05:40.320 [2024-05-14 23:21:03.521196] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:05:40.320 [2024-05-14 23:21:03.521231] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4691:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:05:40.320 [2024-05-14 23:21:03.521255] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:05:40.320 [2024-05-14 23:21:03.521282] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:05:40.320 [2024-05-14 23:21:03.521302] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:05:40.320 passed 00:05:40.320 00:05:40.320 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.320 suites 1 1 n/a 0 0 00:05:40.320 tests 32 32 32 0 0 00:05:40.320 asserts 977 977 977 0 n/a 00:05:40.320 00:05:40.320 Elapsed time = 0.010 seconds 00:05:40.320 23:21:03 unittest.unittest_nvmf -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:05:40.320 00:05:40.320 00:05:40.320 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.320 http://cunit.sourceforge.net/ 00:05:40.320 00:05:40.320 00:05:40.320 Suite: nvmf 00:05:40.320 Test: test_get_rw_params ...passed 00:05:40.320 Test: test_get_rw_ext_params ...passed 00:05:40.320 Test: test_lba_in_range ...passed 00:05:40.320 Test: test_get_dif_ctx ...passed 00:05:40.320 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:05:40.320 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:05:40.320 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-05-14 23:21:03.553564] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:05:40.320 [2024-05-14 23:21:03.553817] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:05:40.320 [2024-05-14 23:21:03.553897] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:05:40.320 [2024-05-14 23:21:03.553953] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:05:40.320 [2024-05-14 23:21:03.554028] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:05:40.320 passed 00:05:40.320 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:05:40.320 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:05:40.320 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed[2024-05-14 
23:21:03.554115] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:05:40.320 [2024-05-14 23:21:03.554141] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:05:40.320 [2024-05-14 23:21:03.554205] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:05:40.320 [2024-05-14 23:21:03.554233] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:05:40.320 00:05:40.320 00:05:40.320 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.320 suites 1 1 n/a 0 0 00:05:40.320 tests 10 10 10 0 0 00:05:40.320 asserts 159 159 159 0 n/a 00:05:40.320 00:05:40.320 Elapsed time = 0.000 seconds 00:05:40.320 23:21:03 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:05:40.583 00:05:40.583 00:05:40.583 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.583 http://cunit.sourceforge.net/ 00:05:40.583 00:05:40.583 00:05:40.583 Suite: nvmf 00:05:40.583 Test: test_discovery_log ...passed 00:05:40.583 Test: test_discovery_log_with_filters ...passed 00:05:40.583 00:05:40.583 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.583 suites 1 1 n/a 0 0 00:05:40.583 tests 2 2 2 0 0 00:05:40.583 asserts 238 238 238 0 n/a 00:05:40.583 00:05:40.583 Elapsed time = 0.000 seconds 00:05:40.583 23:21:03 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:05:40.583 00:05:40.584 00:05:40.584 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.584 http://cunit.sourceforge.net/ 00:05:40.584 00:05:40.584 00:05:40.584 Suite: nvmf 00:05:40.584 Test: nvmf_test_create_subsystem ...[2024-05-14 23:21:03.619454] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:05:40.584 [2024-05-14 23:21:03.619948] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:05:40.584 [2024-05-14 23:21:03.620066] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:05:40.584 [2024-05-14 23:21:03.620492] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:05:40.584 [2024-05-14 23:21:03.620566] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:05:40.584 [2024-05-14 23:21:03.620614] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:05:40.584 [2024-05-14 23:21:03.620893] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 
00:05:40.584 [2024-05-14 23:21:03.620957] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:05:40.584 [2024-05-14 23:21:03.620985] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:05:40.584 [2024-05-14 23:21:03.621017] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:05:40.584 [2024-05-14 23:21:03.621050] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:05:40.584 [2024-05-14 23:21:03.621085] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:05:40.584 [2024-05-14 23:21:03.621137] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:05:40.584 [2024-05-14 23:21:03.621530] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:05:40.584 [2024-05-14 23:21:03.621610] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:05:40.584 [2024-05-14 23:21:03.621651] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:05:40.584 passed 00:05:40.584 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-05-14 23:21:03.621687] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:05:40.584 [2024-05-14 23:21:03.621943] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:05:40.584 [2024-05-14 23:21:03.621974] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:05:40.584 [2024-05-14 23:21:03.622035] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:05:40.584 [2024-05-14 23:21:03.622070] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:05:40.584 [2024-05-14 23:21:03.622095] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:05:40.584 [2024-05-14 23:21:03.622286] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2003:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:05:40.584 [2024-05-14 23:21:03.622335] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1984:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:05:40.584 passed 00:05:40.584 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-05-14 23:21:03.622515] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2112:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:05:40.584 passed 00:05:40.584 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:05:40.584 Test: test_spdk_nvmf_ns_visible ...[2024-05-14 23:21:03.622955] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:05:40.584 passed 00:05:40.584 Test: test_reservation_register ...[2024-05-14 23:21:03.623748] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3051:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:40.584 [2024-05-14 23:21:03.623829] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3109:nvmf_ns_reservation_register: *ERROR*: No registrant 00:05:40.584 passed 00:05:40.584 Test: test_reservation_register_with_ptpl ...passed 00:05:40.584 Test: test_reservation_acquire_preempt_1 ...passed 00:05:40.584 Test: test_reservation_acquire_release_with_ptpl ...[2024-05-14 23:21:03.625083] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3051:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:40.584 passed 00:05:40.584 Test: test_reservation_release ...passed 00:05:40.584 Test: test_reservation_unregister_notification ...[2024-05-14 23:21:03.626585] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3051:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:40.584 [2024-05-14 23:21:03.626815] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3051:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:40.584 passed 00:05:40.584 Test: test_reservation_release_notification ...[2024-05-14 23:21:03.627388] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3051:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:40.584 passed 00:05:40.584 Test: test_reservation_release_notification_write_exclusive ...[2024-05-14 23:21:03.627661] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3051:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:40.584 passed 00:05:40.584 Test: test_reservation_clear_notification ...[2024-05-14 23:21:03.627938] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3051:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:40.584 passed 00:05:40.584 Test: test_reservation_preempt_notification ...passed 00:05:40.584 Test: test_spdk_nvmf_ns_event ...[2024-05-14 23:21:03.628450] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3051:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:05:40.584 passed 00:05:40.584 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:05:40.584 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:05:40.584 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:05:40.584 Test: test_nvmf_ns_reservation_report ...passed 00:05:40.584 Test: test_nvmf_nqn_is_valid ...[2024-05-14 23:21:03.629042] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:05:40.584 [2024-05-14 23:21:03.629105] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1036:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:05:40.584 [2024-05-14 23:21:03.629219] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3414:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:05:40.584 [2024-05-14 
23:21:03.629284] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:05:40.584 [2024-05-14 23:21:03.629340] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:40a1c2ae-26aa-4049-86fa-500dc99bddc": uuid is not the correct length 00:05:40.584 [2024-05-14 23:21:03.629373] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:05:40.584 passed 00:05:40.584 Test: test_nvmf_ns_reservation_restore ...[2024-05-14 23:21:03.629790] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2608:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:05:40.584 passed 00:05:40.584 Test: test_nvmf_subsystem_state_change ...passed 00:05:40.584 Test: test_nvmf_reservation_custom_ops ...passed 00:05:40.584 00:05:40.584 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.584 suites 1 1 n/a 0 0 00:05:40.584 tests 24 24 24 0 0 00:05:40.584 asserts 499 499 499 0 n/a 00:05:40.584 00:05:40.584 Elapsed time = 0.010 seconds 00:05:40.584 23:21:03 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:05:40.584 00:05:40.584 00:05:40.584 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.584 http://cunit.sourceforge.net/ 00:05:40.584 00:05:40.584 00:05:40.584 Suite: nvmf 00:05:40.584 Test: test_nvmf_tcp_create ...[2024-05-14 23:21:03.685692] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:05:40.584 passed 00:05:40.584 Test: test_nvmf_tcp_destroy ...passed 00:05:40.584 Test: test_nvmf_tcp_poll_group_create ...passed 00:05:40.584 Test: test_nvmf_tcp_send_c2h_data ...passed 00:05:40.584 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:05:40.584 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:05:40.584 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:05:40.584 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-05-14 23:21:03.795595] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 passed 00:05:40.584 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:05:40.584 Test: test_nvmf_tcp_icreq_handle ...[2024-05-14 23:21:03.795685] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2910 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.795775] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2910 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.795815] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.795839] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2910 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.796038] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:05:40.584 [2024-05-14 23:21:03.796337] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
errno=2 00:05:40.584 [2024-05-14 23:21:03.796447] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2910 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.796482] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:05:40.584 [2024-05-14 23:21:03.796514] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2910 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.796749] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.796802] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2910 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.796836] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:05:40.584 passed 00:05:40.584 Test: test_nvmf_tcp_check_xfer_type ...passed 00:05:40.584 Test: test_nvmf_tcp_invalid_sgl ...[2024-05-14 23:21:03.796893] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2910 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.797057] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2508:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:05:40.584 passed 00:05:40.584 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-05-14 23:21:03.797324] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.797370] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2910 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.797424] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2240:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7fffabad3670 00:05:40.584 [2024-05-14 23:21:03.797513] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.797566] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2dd0 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.797619] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2297:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7fffabad2dd0 00:05:40.584 [2024-05-14 23:21:03.797653] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.797688] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2dd0 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.797711] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2250:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:05:40.584 [2024-05-14 23:21:03.797741] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.797785] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2dd0 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.797914] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2289:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:05:40.584 [2024-05-14 23:21:03.797950] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.797981] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2dd0 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.798210] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.798263] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2dd0 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.798325] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.798352] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2dd0 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.798389] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.798413] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2dd0 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.798443] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.798715] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2dd0 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.798784] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.799045] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2dd0 is same with the state(5) to be set 00:05:40.584 [2024-05-14 23:21:03.799100] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:05:40.584 [2024-05-14 23:21:03.799124] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffabad2dd0 is same with the state(5) to be set 00:05:40.584 passed 00:05:40.584 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed 00:05:40.584 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-05-14 23:21:03.820372] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:05:40.584 passed 00:05:40.584 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-05-14 23:21:03.820448] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
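Editor's note: the two errors just above come from the TLS PSK identity helper rejecting a too-small output buffer and an unknown cipher suite. As a rough illustration of those negative paths only (this is not the SPDK nvme_tcp_generate_psk_identity API; the function name, signature, constants, and identity format below are assumptions made for the sketch):

```c
/*
 * Hedged sketch: mirrors the two failure modes visible in the log
 * (output buffer too small, unknown cipher suite). Stand-in code,
 * not the SPDK implementation.
 */
#include <stdio.h>
#include <string.h>

enum psk_cipher { PSK_AES128_GCM_SHA256 = 1, PSK_AES256_GCM_SHA384 = 2 };

/* Hypothetical helper: writes an "NVMe0R01 <hostnqn> <subnqn>"-style identity. */
static int generate_psk_identity(char *out, size_t out_len,
                                 const char *hostnqn, const char *subnqn,
                                 int cipher)
{
    if (cipher != PSK_AES128_GCM_SHA256 && cipher != PSK_AES256_GCM_SHA384) {
        fprintf(stderr, "Unknown cipher suite requested!\n");
        return -1;
    }
    /* +11 roughly covers the fixed prefix, two separators and the NUL. */
    if (out_len < strlen(hostnqn) + strlen(subnqn) + 11) {
        fprintf(stderr, "Out buffer too small!\n");
        return -1;
    }
    snprintf(out, out_len, "NVMe0R01 %s %s", hostnqn, subnqn);
    return 0;
}

int main(void)
{
    char small[8], big[512];
    const char *host = "nqn.2016-06.io.spdk:host1";
    const char *subsys = "nqn.2016-06.io.spdk:cnode1";

    /* Negative paths, exercised the way the unit test does. */
    if (generate_psk_identity(small, sizeof(small), host, subsys,
                              PSK_AES128_GCM_SHA256) == 0) return 1;
    if (generate_psk_identity(big, sizeof(big), host, subsys, 0xff) == 0) return 1;
    /* Happy path. */
    return generate_psk_identity(big, sizeof(big), host, subsys,
                                 PSK_AES256_GCM_SHA384);
}
```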
00:05:40.584 [2024-05-14 23:21:03.820663] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:05:40.585 [2024-05-14 23:21:03.820965] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:05:40.585 passed 00:05:40.585 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-05-14 23:21:03.821316] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:05:40.585 passed 00:05:40.585 00:05:40.585 [2024-05-14 23:21:03.821365] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:05:40.585 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.585 suites 1 1 n/a 0 0 00:05:40.585 tests 17 17 17 0 0 00:05:40.585 asserts 222 222 222 0 n/a 00:05:40.585 00:05:40.585 Elapsed time = 0.160 seconds 00:05:40.847 23:21:03 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:05:40.847 00:05:40.847 00:05:40.847 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.847 http://cunit.sourceforge.net/ 00:05:40.847 00:05:40.847 00:05:40.847 Suite: nvmf 00:05:40.847 Test: test_nvmf_tgt_create_poll_group ...passed 00:05:40.847 00:05:40.847 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.847 suites 1 1 n/a 0 0 00:05:40.847 tests 1 1 1 0 0 00:05:40.847 asserts 17 17 17 0 n/a 00:05:40.847 00:05:40.847 Elapsed time = 0.020 seconds 00:05:40.847 00:05:40.847 real 0m0.469s 00:05:40.847 user 0m0.201s 00:05:40.847 sys 0m0.271s 00:05:40.847 ************************************ 00:05:40.847 END TEST unittest_nvmf 00:05:40.847 ************************************ 00:05:40.847 23:21:03 unittest.unittest_nvmf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.847 23:21:03 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:05:40.847 23:21:04 unittest -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:40.847 23:21:04 unittest -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:40.847 23:21:04 unittest -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:05:40.847 23:21:04 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.847 23:21:04 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.847 23:21:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:40.848 ************************************ 00:05:40.848 START TEST unittest_nvmf_rdma 00:05:40.848 ************************************ 00:05:40.848 23:21:04 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:05:40.848 00:05:40.848 00:05:40.848 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.848 http://cunit.sourceforge.net/ 00:05:40.848 00:05:40.848 00:05:40.848 Suite: nvmf 00:05:40.848 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-05-14 23:21:04.044780] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1860:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:05:40.848 [2024-05-14 23:21:04.045451] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1910:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:05:40.848 [2024-05-14 23:21:04.045502] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1910:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:05:40.848 passed 00:05:40.848 Test: test_spdk_nvmf_rdma_request_process ...passed 00:05:40.848 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:05:40.848 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:05:40.848 Test: test_nvmf_rdma_opts_init ...passed 00:05:40.848 Test: test_nvmf_rdma_request_free_data ...passed 00:05:40.848 Test: test_nvmf_rdma_resources_create ...passed 00:05:40.848 Test: test_nvmf_rdma_qpair_compare ...passed 00:05:40.848 Test: test_nvmf_rdma_resize_cq ...[2024-05-14 23:21:04.047960] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 949:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:05:40.848 Using CQ of insufficient size may lead to CQ overrun 00:05:40.848 [2024-05-14 23:21:04.048066] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:05:40.848 passed 00:05:40.848 00:05:40.848 [2024-05-14 23:21:04.048142] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 962:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:05:40.848 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.848 suites 1 1 n/a 0 0 00:05:40.848 tests 9 9 9 0 0 00:05:40.848 asserts 579 579 579 0 n/a 00:05:40.848 00:05:40.848 Elapsed time = 0.000 seconds 00:05:40.848 00:05:40.848 real 0m0.031s 00:05:40.848 user 0m0.011s 00:05:40.848 sys 0m0.020s 00:05:40.848 23:21:04 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.848 23:21:04 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:40.848 ************************************ 00:05:40.848 END TEST unittest_nvmf_rdma 00:05:40.848 ************************************ 00:05:40.848 23:21:04 unittest -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:40.848 23:21:04 unittest -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:05:40.848 23:21:04 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.848 23:21:04 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.848 23:21:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:40.848 ************************************ 00:05:40.848 START TEST unittest_scsi 00:05:40.848 ************************************ 00:05:40.848 23:21:04 unittest.unittest_scsi -- common/autotest_common.sh@1121 -- # unittest_scsi 00:05:40.848 23:21:04 unittest.unittest_scsi -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:05:40.848 00:05:40.848 00:05:40.848 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.848 http://cunit.sourceforge.net/ 00:05:40.848 00:05:40.848 00:05:40.848 Suite: dev_suite 00:05:40.848 Test: dev_destruct_null_dev ...passed 00:05:40.848 Test: dev_destruct_zero_luns ...passed 00:05:40.848 Test: dev_destruct_null_lun ...passed 00:05:40.848 Test: dev_destruct_success ...passed 00:05:40.848 Test: dev_construct_num_luns_zero ...[2024-05-14 23:21:04.122477] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: 
*ERROR*: device Name: no LUNs specified 00:05:40.848 passed 00:05:40.848 Test: dev_construct_no_lun_zero ...[2024-05-14 23:21:04.122921] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:05:40.848 passed 00:05:40.848 Test: dev_construct_null_lun ...passed 00:05:40.848 Test: dev_construct_name_too_long ...[2024-05-14 23:21:04.122972] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:05:40.848 [2024-05-14 23:21:04.123014] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:05:40.848 passed 00:05:40.848 Test: dev_construct_success ...passed 00:05:40.848 Test: dev_construct_success_lun_zero_not_first ...passed 00:05:40.848 Test: dev_queue_mgmt_task_success ...passed 00:05:40.848 Test: dev_queue_task_success ...passed 00:05:40.848 Test: dev_stop_success ...passed 00:05:40.848 Test: dev_add_port_max_ports ...[2024-05-14 23:21:04.123776] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:05:40.848 passed 00:05:40.848 Test: dev_add_port_construct_failure1 ...[2024-05-14 23:21:04.124096] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:05:40.848 passed 00:05:40.848 Test: dev_add_port_construct_failure2 ...[2024-05-14 23:21:04.124398] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:05:40.848 passed 00:05:40.848 Test: dev_add_port_success1 ...passed 00:05:40.848 Test: dev_add_port_success2 ...passed 00:05:40.848 Test: dev_add_port_success3 ...passed 00:05:40.848 Test: dev_find_port_by_id_num_ports_zero ...passed 00:05:40.848 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:05:40.848 Test: dev_find_port_by_id_success ...passed 00:05:40.848 Test: dev_add_lun_bdev_not_found ...passed 00:05:40.848 Test: dev_add_lun_no_free_lun_id ...[2024-05-14 23:21:04.125202] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:05:40.848 passed 00:05:40.848 Test: dev_add_lun_success1 ...passed 00:05:40.848 Test: dev_add_lun_success2 ...passed 00:05:40.848 Test: dev_check_pending_tasks ...passed 00:05:40.848 Test: dev_iterate_luns ...passed 00:05:40.848 Test: dev_find_free_lun ...passed 00:05:40.848 00:05:40.848 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.848 suites 1 1 n/a 0 0 00:05:40.848 tests 29 29 29 0 0 00:05:40.848 asserts 97 97 97 0 n/a 00:05:40.848 00:05:40.848 Elapsed time = 0.000 seconds 00:05:41.107 23:21:04 unittest.unittest_scsi -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:05:41.107 00:05:41.107 00:05:41.107 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.107 http://cunit.sourceforge.net/ 00:05:41.107 00:05:41.107 00:05:41.107 Suite: lun_suite 00:05:41.107 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-05-14 23:21:04.153083] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:05:41.107 passed 00:05:41.107 Test: 
lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-05-14 23:21:04.153676] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:05:41.107 passed 00:05:41.107 Test: lun_task_mgmt_execute_lun_reset ...passed 00:05:41.107 Test: lun_task_mgmt_execute_target_reset ...passed 00:05:41.107 Test: lun_task_mgmt_execute_invalid_case ...passed[2024-05-14 23:21:04.153869] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:05:41.107 00:05:41.107 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:05:41.107 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:05:41.107 Test: lun_append_task_null_lun_not_supported ...passed 00:05:41.107 Test: lun_execute_scsi_task_pending ...passed 00:05:41.107 Test: lun_execute_scsi_task_complete ...passed 00:05:41.107 Test: lun_execute_scsi_task_resize ...passed 00:05:41.107 Test: lun_destruct_success ...passed 00:05:41.107 Test: lun_construct_null_ctx ...passed 00:05:41.107 Test: lun_construct_success ...[2024-05-14 23:21:04.154245] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:05:41.107 passed 00:05:41.107 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:05:41.107 Test: lun_reset_task_suspend_scsi_task ...passed 00:05:41.107 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:05:41.107 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:05:41.107 00:05:41.107 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.107 suites 1 1 n/a 0 0 00:05:41.107 tests 18 18 18 0 0 00:05:41.107 asserts 153 153 153 0 n/a 00:05:41.107 00:05:41.107 Elapsed time = 0.010 seconds 00:05:41.107 23:21:04 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:05:41.107 00:05:41.107 00:05:41.107 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.107 http://cunit.sourceforge.net/ 00:05:41.108 00:05:41.108 00:05:41.108 Suite: scsi_suite 00:05:41.108 Test: scsi_init ...passed 00:05:41.108 00:05:41.108 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.108 suites 1 1 n/a 0 0 00:05:41.108 tests 1 1 1 0 0 00:05:41.108 asserts 1 1 1 0 n/a 00:05:41.108 00:05:41.108 Elapsed time = 0.000 seconds 00:05:41.108 23:21:04 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:05:41.108 00:05:41.108 00:05:41.108 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.108 http://cunit.sourceforge.net/ 00:05:41.108 00:05:41.108 00:05:41.108 Suite: translation_suite 00:05:41.108 Test: mode_select_6_test ...passed 00:05:41.108 Test: mode_select_6_test2 ...passed 00:05:41.108 Test: mode_sense_6_test ...passed 00:05:41.108 Test: mode_sense_10_test ...passed 00:05:41.108 Test: inquiry_evpd_test ...passed 00:05:41.108 Test: inquiry_standard_test ...passed 00:05:41.108 Test: inquiry_overflow_test ...passed 00:05:41.108 Test: task_complete_test ...passed 00:05:41.108 Test: lba_range_test ...passed 00:05:41.108 Test: xfer_len_test ...[2024-05-14 23:21:04.202752] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:05:41.108 passed 00:05:41.108 Test: xfer_test ...passed 00:05:41.108 Test: scsi_name_padding_test ...passed 00:05:41.108 Test: get_dif_ctx_test ...passed 00:05:41.108 Test: unmap_split_test ...passed 
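Editor's note: the xfer_len_test failure recorded above reduces to a single bounds check: a READ/WRITE whose transfer length exceeds the device maximum (8192 blocks in this run) must be rejected. A minimal sketch of that check follows, using hypothetical names rather than the real bdev_scsi_readwrite() internals:

```c
/*
 * Hedged sketch: reproduces only the bounds check behind the
 * "xfer_len 8193 > maximum transfer length 8192" message above.
 * Names and the ILLEGAL_REQUEST return value are assumptions.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_XFER_LEN_BLOCKS 8192u   /* matches the limit printed in the log */

enum { SCSI_OK = 0, SCSI_ILLEGAL_REQUEST = -1 };

static int check_readwrite_xfer_len(uint32_t xfer_len_blocks)
{
    if (xfer_len_blocks > MAX_XFER_LEN_BLOCKS) {
        fprintf(stderr, "xfer_len %u > maximum transfer length %u\n",
                (unsigned)xfer_len_blocks, (unsigned)MAX_XFER_LEN_BLOCKS);
        return SCSI_ILLEGAL_REQUEST;   /* the unit test expects this error path */
    }
    return SCSI_OK;
}

int main(void)
{
    /* 8193 blocks trips the check, exactly as in the xfer_len_test case. */
    return (check_readwrite_xfer_len(8193) == SCSI_ILLEGAL_REQUEST &&
            check_readwrite_xfer_len(8192) == SCSI_OK) ? 0 : 1;
}
```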
00:05:41.108 00:05:41.108 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.108 suites 1 1 n/a 0 0 00:05:41.108 tests 14 14 14 0 0 00:05:41.108 asserts 1205 1205 1205 0 n/a 00:05:41.108 00:05:41.108 Elapsed time = 0.000 seconds 00:05:41.108 23:21:04 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:05:41.108 00:05:41.108 00:05:41.108 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.108 http://cunit.sourceforge.net/ 00:05:41.108 00:05:41.108 00:05:41.108 Suite: reservation_suite 00:05:41.108 Test: test_reservation_register ...[2024-05-14 23:21:04.225993] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:41.108 passed 00:05:41.108 Test: test_reservation_reserve ...[2024-05-14 23:21:04.226546] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:41.108 [2024-05-14 23:21:04.226607] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:05:41.108 [2024-05-14 23:21:04.226801] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:05:41.108 passed 00:05:41.108 Test: test_reservation_preempt_non_all_regs ...[2024-05-14 23:21:04.227088] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:41.108 [2024-05-14 23:21:04.227423] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:05:41.108 passed 00:05:41.108 Test: test_reservation_preempt_all_regs ...[2024-05-14 23:21:04.227607] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:41.108 passed 00:05:41.108 Test: test_reservation_cmds_conflict ...[2024-05-14 23:21:04.227761] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:41.108 [2024-05-14 23:21:04.227998] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:05:41.108 [2024-05-14 23:21:04.228052] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:05:41.108 [2024-05-14 23:21:04.228078] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:05:41.108 [2024-05-14 23:21:04.228107] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:05:41.108 [2024-05-14 23:21:04.228130] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:05:41.108 passed 00:05:41.108 Test: test_scsi2_reserve_release ...passed 00:05:41.108 Test: test_pr_with_scsi2_reserve_release ...passed[2024-05-14 23:21:04.228661] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:05:41.108 00:05:41.108 00:05:41.108 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.108 suites 1 1 n/a 0 0 
00:05:41.108 tests 7 7 7 0 0 00:05:41.108 asserts 257 257 257 0 n/a 00:05:41.108 00:05:41.108 Elapsed time = 0.000 seconds 00:05:41.108 ************************************ 00:05:41.108 END TEST unittest_scsi 00:05:41.108 ************************************ 00:05:41.108 00:05:41.108 real 0m0.129s 00:05:41.108 user 0m0.060s 00:05:41.108 sys 0m0.071s 00:05:41.108 23:21:04 unittest.unittest_scsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.108 23:21:04 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:05:41.108 23:21:04 unittest -- unit/unittest.sh@276 -- # uname -s 00:05:41.108 23:21:04 unittest -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:05:41.108 23:21:04 unittest -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:05:41.108 23:21:04 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.108 23:21:04 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.108 23:21:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:41.108 ************************************ 00:05:41.108 START TEST unittest_sock 00:05:41.108 ************************************ 00:05:41.108 23:21:04 unittest.unittest_sock -- common/autotest_common.sh@1121 -- # unittest_sock 00:05:41.108 23:21:04 unittest.unittest_sock -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:05:41.108 00:05:41.108 00:05:41.108 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.108 http://cunit.sourceforge.net/ 00:05:41.108 00:05:41.108 00:05:41.108 Suite: sock 00:05:41.108 Test: posix_sock ...passed 00:05:41.108 Test: ut_sock ...passed 00:05:41.108 Test: posix_sock_group ...passed 00:05:41.108 Test: ut_sock_group ...passed 00:05:41.108 Test: posix_sock_group_fairness ...passed 00:05:41.108 Test: _posix_sock_close ...passed 00:05:41.108 Test: sock_get_default_opts ...passed 00:05:41.108 Test: ut_sock_impl_get_set_opts ...passed 00:05:41.108 Test: posix_sock_impl_get_set_opts ...passed 00:05:41.108 Test: ut_sock_map ...passed 00:05:41.108 Test: override_impl_opts ...passed 00:05:41.108 Test: ut_sock_group_get_ctx ...passed 00:05:41.108 00:05:41.108 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.108 suites 1 1 n/a 0 0 00:05:41.108 tests 12 12 12 0 0 00:05:41.108 asserts 349 349 349 0 n/a 00:05:41.108 00:05:41.108 Elapsed time = 0.010 seconds 00:05:41.108 23:21:04 unittest.unittest_sock -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:05:41.108 00:05:41.108 00:05:41.108 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.108 http://cunit.sourceforge.net/ 00:05:41.108 00:05:41.108 00:05:41.108 Suite: posix 00:05:41.108 Test: flush ...passed 00:05:41.108 00:05:41.108 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.108 suites 1 1 n/a 0 0 00:05:41.108 tests 1 1 1 0 0 00:05:41.108 asserts 28 28 28 0 n/a 00:05:41.108 00:05:41.108 Elapsed time = 0.000 seconds 00:05:41.108 23:21:04 unittest.unittest_sock -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:41.108 00:05:41.108 real 0m0.097s 00:05:41.108 user 0m0.036s 00:05:41.108 sys 0m0.039s 00:05:41.108 23:21:04 unittest.unittest_sock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.108 23:21:04 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:05:41.108 ************************************ 00:05:41.108 END TEST unittest_sock 00:05:41.108 
************************************ 00:05:41.367 23:21:04 unittest -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:05:41.367 23:21:04 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.367 23:21:04 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.367 23:21:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:41.367 ************************************ 00:05:41.367 START TEST unittest_thread 00:05:41.367 ************************************ 00:05:41.367 23:21:04 unittest.unittest_thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:05:41.367 00:05:41.367 00:05:41.367 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.367 http://cunit.sourceforge.net/ 00:05:41.367 00:05:41.367 00:05:41.367 Suite: io_channel 00:05:41.367 Test: thread_alloc ...passed 00:05:41.367 Test: thread_send_msg ...passed 00:05:41.367 Test: thread_poller ...passed 00:05:41.367 Test: poller_pause ...passed 00:05:41.367 Test: thread_for_each ...passed 00:05:41.367 Test: for_each_channel_remove ...passed 00:05:41.367 Test: for_each_channel_unreg ...[2024-05-14 23:21:04.458895] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2173:spdk_io_device_register: *ERROR*: io_device 0x7fff0d511e20 already registered (old:0x613000000200 new:0x6130000003c0) 00:05:41.367 passed 00:05:41.367 Test: thread_name ...passed 00:05:41.367 Test: channel ...[2024-05-14 23:21:04.461496] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2307:spdk_get_io_channel: *ERROR*: could not find io_device 0x492120 00:05:41.367 passed 00:05:41.367 Test: channel_destroy_races ...passed 00:05:41.367 Test: thread_exit_test ...[2024-05-14 23:21:04.464599] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 635:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:05:41.367 passed 00:05:41.367 Test: thread_update_stats_test ...passed 00:05:41.367 Test: nested_channel ...passed 00:05:41.367 Test: device_unregister_and_thread_exit_race ...passed 00:05:41.367 Test: cache_closest_timed_poller ...passed 00:05:41.367 Test: multi_timed_pollers_have_same_expiration ...passed 00:05:41.367 Test: io_device_lookup ...passed 00:05:41.367 Test: spdk_spin ...[2024-05-14 23:21:04.470808] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3071:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:05:41.367 [2024-05-14 23:21:04.470861] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7fff0d511e00 00:05:41.367 [2024-05-14 23:21:04.471024] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3109:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:05:41.367 [2024-05-14 23:21:04.472174] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:41.368 [2024-05-14 23:21:04.472233] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7fff0d511e00 00:05:41.368 [2024-05-14 23:21:04.472259] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:05:41.368 [2024-05-14 23:21:04.472295] 
/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7fff0d511e00 00:05:41.368 [2024-05-14 23:21:04.472324] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:05:41.368 [2024-05-14 23:21:04.472349] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7fff0d511e00 00:05:41.368 [2024-05-14 23:21:04.472568] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3053:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:05:41.368 [2024-05-14 23:21:04.472658] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7fff0d511e00 00:05:41.368 passed 00:05:41.368 Test: for_each_channel_and_thread_exit_race ...passed 00:05:41.368 Test: for_each_thread_and_thread_exit_race ...passed 00:05:41.368 00:05:41.368 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.368 suites 1 1 n/a 0 0 00:05:41.368 tests 20 20 20 0 0 00:05:41.368 asserts 409 409 409 0 n/a 00:05:41.368 00:05:41.368 Elapsed time = 0.040 seconds 00:05:41.368 00:05:41.368 real 0m0.064s 00:05:41.368 user 0m0.043s 00:05:41.368 sys 0m0.022s 00:05:41.368 23:21:04 unittest.unittest_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.368 23:21:04 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.368 ************************************ 00:05:41.368 END TEST unittest_thread 00:05:41.368 ************************************ 00:05:41.368 23:21:04 unittest -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:05:41.368 23:21:04 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.368 23:21:04 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.368 23:21:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:41.368 ************************************ 00:05:41.368 START TEST unittest_iobuf 00:05:41.368 ************************************ 00:05:41.368 23:21:04 unittest.unittest_iobuf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:05:41.368 00:05:41.368 00:05:41.368 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.368 http://cunit.sourceforge.net/ 00:05:41.368 00:05:41.368 00:05:41.368 Suite: io_channel 00:05:41.368 Test: iobuf ...passed 00:05:41.368 Test: iobuf_cache ...[2024-05-14 23:21:04.551679] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:05:41.368 [2024-05-14 23:21:04.551961] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:41.368 [2024-05-14 23:21:04.552175] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:05:41.368 [2024-05-14 23:21:04.552219] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
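Editor's note: the iobuf warnings above are a sizing problem: each channel tries to pre-populate its per-module cache from the global small/large pools, so the pool counts must cover the sum of all per-channel cache sizes (4/5 entries populated means the pool of 4 could not fill a cache of 5). A rough stand-alone sketch of that arithmetic; the struct and helper are illustrative stand-ins, not the spdk_iobuf API:

```c
/*
 * Hedged sketch of the sizing rule the warnings point at: a per-channel
 * iobuf cache can only be fully populated if the global pool holds enough
 * buffers for every channel's cache.
 */
#include <stdio.h>
#include <stdint.h>

struct iobuf_module_cfg {
    const char *name;
    uint32_t    cache_size;   /* entries each channel of this module caches */
    uint32_t    num_channels; /* channels (threads) the module will create  */
};

/* Returns the minimum pool count needed so every cache can be filled. */
static uint64_t min_pool_count(const struct iobuf_module_cfg *mods, int n)
{
    uint64_t total = 0;
    for (int i = 0; i < n; i++) {
        total += (uint64_t)mods[i].cache_size * mods[i].num_channels;
    }
    return total;
}

int main(void)
{
    /* The log's failing case: caches of 5 and 4 entries against a pool of 4. */
    struct iobuf_module_cfg mods[] = {
        { "ut_module0", 5, 1 },
        { "ut_module1", 4, 1 },
    };
    uint64_t need = min_pool_count(mods, 2);
    uint32_t small_pool_count = 4;

    if ((uint64_t)small_pool_count < need) {
        printf("small_pool_count %u < %llu required; caches stay partial\n",
               (unsigned)small_pool_count, (unsigned long long)need);
    }
    return 0;
}
```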
00:05:41.368 [2024-05-14 23:21:04.552349] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:05:41.368 [2024-05-14 23:21:04.552504] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:41.368 passed 00:05:41.368 00:05:41.368 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.368 suites 1 1 n/a 0 0 00:05:41.368 tests 2 2 2 0 0 00:05:41.368 asserts 107 107 107 0 n/a 00:05:41.368 00:05:41.368 Elapsed time = 0.000 seconds 00:05:41.368 00:05:41.368 real 0m0.029s 00:05:41.368 user 0m0.014s 00:05:41.368 sys 0m0.016s 00:05:41.368 23:21:04 unittest.unittest_iobuf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.368 23:21:04 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:05:41.368 ************************************ 00:05:41.368 END TEST unittest_iobuf 00:05:41.368 ************************************ 00:05:41.368 23:21:04 unittest -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:05:41.368 23:21:04 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.368 23:21:04 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.368 23:21:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:41.368 ************************************ 00:05:41.368 START TEST unittest_util 00:05:41.368 ************************************ 00:05:41.368 23:21:04 unittest.unittest_util -- common/autotest_common.sh@1121 -- # unittest_util 00:05:41.368 23:21:04 unittest.unittest_util -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:05:41.368 00:05:41.368 00:05:41.368 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.368 http://cunit.sourceforge.net/ 00:05:41.368 00:05:41.368 00:05:41.368 Suite: base64 00:05:41.368 Test: test_base64_get_encoded_strlen ...passed 00:05:41.368 Test: test_base64_get_decoded_len ...passed 00:05:41.368 Test: test_base64_encode ...passed 00:05:41.368 Test: test_base64_decode ...passed 00:05:41.368 Test: test_base64_urlsafe_encode ...passed 00:05:41.368 Test: test_base64_urlsafe_decode ...passed 00:05:41.368 00:05:41.368 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.368 suites 1 1 n/a 0 0 00:05:41.368 tests 6 6 6 0 0 00:05:41.368 asserts 112 112 112 0 n/a 00:05:41.368 00:05:41.368 Elapsed time = 0.000 seconds 00:05:41.368 23:21:04 unittest.unittest_util -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:05:41.368 00:05:41.368 00:05:41.368 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.368 http://cunit.sourceforge.net/ 00:05:41.368 00:05:41.368 00:05:41.368 Suite: bit_array 00:05:41.368 Test: test_1bit ...passed 00:05:41.368 Test: test_64bit ...passed 00:05:41.368 Test: test_find ...passed 00:05:41.368 Test: test_resize ...passed 00:05:41.368 Test: test_errors ...passed 00:05:41.368 Test: test_count ...passed 00:05:41.368 Test: test_mask_store_load ...passed 00:05:41.368 Test: test_mask_clear ...passed 00:05:41.368 00:05:41.368 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.368 suites 1 1 n/a 0 0 00:05:41.368 tests 8 8 8 0 0 00:05:41.368 asserts 5075 5075 5075 0 n/a 00:05:41.368 00:05:41.368 Elapsed time = 0.000 seconds 00:05:41.766 23:21:04 
unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:05:41.766 00:05:41.766 00:05:41.766 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.766 http://cunit.sourceforge.net/ 00:05:41.766 00:05:41.766 00:05:41.766 Suite: cpuset 00:05:41.766 Test: test_cpuset ...passed 00:05:41.766 Test: test_cpuset_parse ...[2024-05-14 23:21:04.681809] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:05:41.766 [2024-05-14 23:21:04.682235] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:05:41.766 [2024-05-14 23:21:04.682389] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:05:41.766 [2024-05-14 23:21:04.682575] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:05:41.766 [2024-05-14 23:21:04.682645] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:05:41.766 [2024-05-14 23:21:04.682702] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:05:41.766 [2024-05-14 23:21:04.682747] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:05:41.766 [2024-05-14 23:21:04.682834] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:05:41.766 passed 00:05:41.766 Test: test_cpuset_fmt ...passed 00:05:41.766 00:05:41.766 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.766 suites 1 1 n/a 0 0 00:05:41.766 tests 3 3 3 0 0 00:05:41.766 asserts 65 65 65 0 n/a 00:05:41.766 00:05:41.766 Elapsed time = 0.010 seconds 00:05:41.766 23:21:04 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:05:41.766 00:05:41.766 00:05:41.766 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.766 http://cunit.sourceforge.net/ 00:05:41.766 00:05:41.766 00:05:41.766 Suite: crc16 00:05:41.766 Test: test_crc16_t10dif ...passed 00:05:41.766 Test: test_crc16_t10dif_seed ...passed 00:05:41.766 Test: test_crc16_t10dif_copy ...passed 00:05:41.766 00:05:41.766 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.766 suites 1 1 n/a 0 0 00:05:41.766 tests 3 3 3 0 0 00:05:41.766 asserts 5 5 5 0 n/a 00:05:41.766 00:05:41.766 Elapsed time = 0.000 seconds 00:05:41.766 23:21:04 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:05:41.766 00:05:41.766 00:05:41.766 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.766 http://cunit.sourceforge.net/ 00:05:41.766 00:05:41.766 00:05:41.766 Suite: crc32_ieee 00:05:41.766 Test: test_crc32_ieee ...passed 00:05:41.766 00:05:41.766 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.766 suites 1 1 n/a 0 0 00:05:41.766 tests 1 1 1 0 0 00:05:41.766 asserts 1 1 1 0 n/a 00:05:41.766 00:05:41.766 Elapsed time = 0.000 seconds 00:05:41.766 23:21:04 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:05:41.766 00:05:41.766 00:05:41.766 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.766 
http://cunit.sourceforge.net/ 00:05:41.766 00:05:41.766 00:05:41.766 Suite: crc32c 00:05:41.766 Test: test_crc32c ...passed 00:05:41.766 Test: test_crc32c_nvme ...passed 00:05:41.766 00:05:41.766 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.766 suites 1 1 n/a 0 0 00:05:41.766 tests 2 2 2 0 0 00:05:41.766 asserts 16 16 16 0 n/a 00:05:41.766 00:05:41.766 Elapsed time = 0.000 seconds 00:05:41.766 23:21:04 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:05:41.766 00:05:41.766 00:05:41.766 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.766 http://cunit.sourceforge.net/ 00:05:41.766 00:05:41.766 00:05:41.766 Suite: crc64 00:05:41.766 Test: test_crc64_nvme ...passed 00:05:41.766 00:05:41.766 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.766 suites 1 1 n/a 0 0 00:05:41.766 tests 1 1 1 0 0 00:05:41.766 asserts 4 4 4 0 n/a 00:05:41.766 00:05:41.766 Elapsed time = 0.000 seconds 00:05:41.766 23:21:04 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:05:41.766 00:05:41.766 00:05:41.766 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.766 http://cunit.sourceforge.net/ 00:05:41.766 00:05:41.767 00:05:41.767 Suite: string 00:05:41.767 Test: test_parse_ip_addr ...passed 00:05:41.767 Test: test_str_chomp ...passed 00:05:41.767 Test: test_parse_capacity ...passed 00:05:41.767 Test: test_sprintf_append_realloc ...passed 00:05:41.767 Test: test_strtol ...passed 00:05:41.767 Test: test_strtoll ...passed 00:05:41.767 Test: test_strarray ...passed 00:05:41.767 Test: test_strcpy_replace ...passed 00:05:41.767 00:05:41.767 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.767 suites 1 1 n/a 0 0 00:05:41.767 tests 8 8 8 0 0 00:05:41.767 asserts 161 161 161 0 n/a 00:05:41.767 00:05:41.767 Elapsed time = 0.000 seconds 00:05:41.767 23:21:04 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:05:41.767 00:05:41.767 00:05:41.767 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.767 http://cunit.sourceforge.net/ 00:05:41.767 00:05:41.767 00:05:41.767 Suite: dif 00:05:41.767 Test: dif_generate_and_verify_test ...[2024-05-14 23:21:04.806664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:41.767 [2024-05-14 23:21:04.807089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:41.767 [2024-05-14 23:21:04.807487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:05:41.767 [2024-05-14 23:21:04.807716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:41.767 [2024-05-14 23:21:04.808049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:41.767 [2024-05-14 23:21:04.808300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:05:41.767 passed 00:05:41.767 Test: dif_disable_check_test ...[2024-05-14 23:21:04.809018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, 
Actual=ffff 00:05:41.767 [2024-05-14 23:21:04.809261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:05:41.767 [2024-05-14 23:21:04.809539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:05:41.767 passed 00:05:41.767 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-05-14 23:21:04.810254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:05:41.767 [2024-05-14 23:21:04.810589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:05:41.767 [2024-05-14 23:21:04.810869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:05:41.767 [2024-05-14 23:21:04.811262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:05:41.767 [2024-05-14 23:21:04.811426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:41.767 [2024-05-14 23:21:04.811708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:41.767 [2024-05-14 23:21:04.811982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:41.767 [2024-05-14 23:21:04.812283] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:05:41.767 [2024-05-14 23:21:04.812488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:41.767 [2024-05-14 23:21:04.812721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:41.767 [2024-05-14 23:21:04.812996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:05:41.767 passed 00:05:41.767 Test: dif_apptag_mask_test ...[2024-05-14 23:21:04.813353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:05:41.767 [2024-05-14 23:21:04.813643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:05:41.767 passed 00:05:41.767 Test: dif_sec_512_md_0_error_test ...[2024-05-14 23:21:04.813736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:41.767 passed 00:05:41.767 Test: dif_sec_4096_md_0_error_test ...[2024-05-14 23:21:04.813979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:05:41.767 [2024-05-14 23:21:04.814103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
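Editor's note: the *_md_0_error cases above hit parameter validation during DIF context setup: the 8-byte protection-information tuple (2-byte guard, 2-byte app tag, 4-byte ref tag) has to fit in the per-block metadata. A hedged stand-in for that check (not the real spdk_dif_ctx_init() signature; the helper name and the second block-size check are assumptions):

```c
/*
 * Hedged sketch of the check behind "Metadata size is smaller than DIF size."
 * Illustrative only; not SPDK's validation code.
 */
#include <stdint.h>
#include <stdio.h>

#define DIF_SIZE_BYTES 8u   /* 2B guard + 2B app tag + 4B ref tag */

static int dif_ctx_params_ok(uint32_t block_size, uint32_t md_size)
{
    if (md_size < DIF_SIZE_BYTES) {
        fprintf(stderr, "Metadata size is smaller than DIF size.\n");
        return 0;
    }
    if (block_size == 0 || block_size < md_size) {
        fprintf(stderr, "Invalid block size.\n");
        return 0;
    }
    return 1;
}

int main(void)
{
    /* md_size = 0 reproduces the error path the *_md_0_error tests exercise. */
    int bad  = dif_ctx_params_ok(512, 0);
    int good = dif_ctx_params_ok(512 + 8, 8);
    return (!bad && good) ? 0 : 1;
}
```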
00:05:41.767 passed 00:05:41.767 Test: dif_sec_4100_md_128_error_test ...[2024-05-14 23:21:04.814275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:05:41.767 passed 00:05:41.767 Test: dif_guard_seed_test ...[2024-05-14 23:21:04.814393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:05:41.767 passed 00:05:41.767 Test: dif_guard_value_test ...passed 00:05:41.767 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:05:41.767 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:05:41.767 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:05:41.767 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:41.767 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:41.767 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:05:41.767 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:05:41.767 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:05:41.767 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:05:41.767 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:41.767 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:05:41.767 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:05:41.767 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:05:41.767 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:05:41.767 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:05:41.767 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:05:41.767 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:41.767 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:41.767 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-14 23:21:04.840413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:05:41.767 [2024-05-14 23:21:04.841760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:05:41.767 [2024-05-14 23:21:04.843167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.767 [2024-05-14 23:21:04.844477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.767 [2024-05-14 23:21:04.845872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.767 [2024-05-14 23:21:04.847189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.767 [2024-05-14 23:21:04.848600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=ad61 00:05:41.767 [2024-05-14 23:21:04.849503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a6ed 00:05:41.767 [2024-05-14 23:21:04.850321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:05:41.767 [2024-05-14 23:21:04.851467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=30574660, Actual=38574660 00:05:41.767 [2024-05-14 23:21:04.852715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.767 [2024-05-14 23:21:04.853865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.767 [2024-05-14 23:21:04.855090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.767 [2024-05-14 23:21:04.856246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.767 [2024-05-14 23:21:04.857464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e6262fae 00:05:41.767 [2024-05-14 23:21:04.858209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=633d6351 00:05:41.767 [2024-05-14 23:21:04.859573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:05:41.767 [2024-05-14 23:21:04.861250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:05:41.767 [2024-05-14 23:21:04.863012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.767 [2024-05-14 23:21:04.864688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.767 [2024-05-14 23:21:04.866359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.767 [2024-05-14 23:21:04.867995] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.767 [2024-05-14 23:21:04.869696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3bd7c6055e219604 00:05:41.767 passed 00:05:41.767 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-05-14 23:21:04.870908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=d7e34448691402cb 00:05:41.767 [2024-05-14 23:21:04.871187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:05:41.767 [2024-05-14 23:21:04.871523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:05:41.767 [2024-05-14 23:21:04.871825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.767 [2024-05-14 23:21:04.872056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.767 [2024-05-14 23:21:04.872417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.872668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.872973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=ad61 00:05:41.768 [2024-05-14 23:21:04.873188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a6ed 00:05:41.768 [2024-05-14 23:21:04.873385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:05:41.768 [2024-05-14 23:21:04.873674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=30574660, Actual=38574660 00:05:41.768 [2024-05-14 23:21:04.873901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.874204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.874486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.874699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.874986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e6262fae 00:05:41.768 [2024-05-14 23:21:04.875175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=633d6351 00:05:41.768 [2024-05-14 23:21:04.875541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:05:41.768 [2024-05-14 23:21:04.875834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:05:41.768 [2024-05-14 23:21:04.876215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.876504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.876915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.768 [2024-05-14 23:21:04.877212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.768 [2024-05-14 23:21:04.877619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3bd7c6055e219604 00:05:41.768 [2024-05-14 23:21:04.877898] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=d7e34448691402cb 00:05:41.768 passed 00:05:41.768 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-05-14 23:21:04.878282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:05:41.768 [2024-05-14 23:21:04.878652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:05:41.768 [2024-05-14 23:21:04.878887] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.879206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.879454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.879783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.880021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=ad61 00:05:41.768 [2024-05-14 23:21:04.880340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a6ed 00:05:41.768 [2024-05-14 23:21:04.880524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:05:41.768 [2024-05-14 23:21:04.880766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=30574660, Actual=38574660 00:05:41.768 [2024-05-14 23:21:04.881001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.881286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.881496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.881750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.881971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e6262fae 00:05:41.768 [2024-05-14 23:21:04.882248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=633d6351 00:05:41.768 [2024-05-14 23:21:04.882560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:05:41.768 [2024-05-14 23:21:04.882930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:05:41.768 [2024-05-14 23:21:04.883247] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.883608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.883907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.768 [2024-05-14 23:21:04.884271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.768 [2024-05-14 23:21:04.884583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3bd7c6055e219604 00:05:41.768 [2024-05-14 23:21:04.884938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=d7e34448691402cb 00:05:41.768 passed 00:05:41.768 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-05-14 23:21:04.885305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:05:41.768 [2024-05-14 23:21:04.885563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:05:41.768 [2024-05-14 23:21:04.885902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.886143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.886509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.886760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.887098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=ad61 00:05:41.768 [2024-05-14 23:21:04.887329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a6ed 00:05:41.768 [2024-05-14 23:21:04.887619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:05:41.768 [2024-05-14 23:21:04.887817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=30574660, Actual=38574660 00:05:41.768 [2024-05-14 23:21:04.888108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.888332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.888611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.888821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: 
Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.889113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e6262fae 00:05:41.768 [2024-05-14 23:21:04.889315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=633d6351 00:05:41.768 [2024-05-14 23:21:04.889688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:05:41.768 [2024-05-14 23:21:04.889989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:05:41.768 [2024-05-14 23:21:04.890364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.890669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.891084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.768 [2024-05-14 23:21:04.891397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.768 [2024-05-14 23:21:04.891802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3bd7c6055e219604 00:05:41.768 [2024-05-14 23:21:04.892082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=d7e34448691402cb 00:05:41.768 passed 00:05:41.768 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-05-14 23:21:04.892497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:05:41.768 [2024-05-14 23:21:04.892800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:05:41.768 [2024-05-14 23:21:04.893034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.893353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.768 [2024-05-14 23:21:04.893612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.893948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.768 [2024-05-14 23:21:04.894200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=ad61 00:05:41.768 [2024-05-14 23:21:04.894505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a6ed 00:05:41.768 passed 00:05:41.768 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-05-14 23:21:04.894819] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:05:41.769 [2024-05-14 23:21:04.895025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=30574660, Actual=38574660 00:05:41.769 [2024-05-14 23:21:04.895329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.895538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.895849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.896052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.896363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e6262fae 00:05:41.769 [2024-05-14 23:21:04.896550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=633d6351 00:05:41.769 [2024-05-14 23:21:04.896925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:05:41.769 [2024-05-14 23:21:04.897232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:05:41.769 [2024-05-14 23:21:04.897623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.897921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.898305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.769 [2024-05-14 23:21:04.898616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.769 [2024-05-14 23:21:04.898995] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3bd7c6055e219604 00:05:41.769 [2024-05-14 23:21:04.899282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=d7e34448691402cb 00:05:41.769 passed 00:05:41.769 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-05-14 23:21:04.899642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:05:41.769 [2024-05-14 23:21:04.899979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:05:41.769 [2024-05-14 23:21:04.900226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.900556] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.900819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.901145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.901398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=ad61 00:05:41.769 [2024-05-14 23:21:04.901691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a6ed 00:05:41.769 passed 00:05:41.769 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-05-14 23:21:04.901935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:05:41.769 [2024-05-14 23:21:04.902213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=30574660, Actual=38574660 00:05:41.769 [2024-05-14 23:21:04.902515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.902740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.903035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.903242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.903564] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e6262fae 00:05:41.769 [2024-05-14 23:21:04.903752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=633d6351 00:05:41.769 [2024-05-14 23:21:04.904143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:05:41.769 [2024-05-14 23:21:04.904443] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:05:41.769 [2024-05-14 23:21:04.904815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.905108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.905518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.769 [2024-05-14 23:21:04.905807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.769 [2024-05-14 23:21:04.906209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3bd7c6055e219604 00:05:41.769 [2024-05-14 23:21:04.906487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=d7e34448691402cb 00:05:41.769 passed 00:05:41.769 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:05:41.769 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:05:41.769 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:05:41.769 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:41.769 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:05:41.769 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:05:41.769 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:41.769 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:05:41.769 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:41.769 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-14 23:21:04.930960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:05:41.769 [2024-05-14 23:21:04.931913] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=6687, Actual=6e87 00:05:41.769 [2024-05-14 23:21:04.932790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.933742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.934630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.935633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.936507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=ad61 00:05:41.769 [2024-05-14 23:21:04.937477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=75da 00:05:41.769 [2024-05-14 23:21:04.938199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:05:41.769 [2024-05-14 23:21:04.938996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b182d7dd, Actual=b982d7dd 00:05:41.769 [2024-05-14 23:21:04.939717] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.940537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.941245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.942009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: 
LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.942730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e6262fae 00:05:41.769 [2024-05-14 23:21:04.943525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=b1cc659d 00:05:41.769 [2024-05-14 23:21:04.944691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:05:41.769 [2024-05-14 23:21:04.945976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a6efcfffd6802d0d, Actual=a6efcfffde802d0d 00:05:41.769 [2024-05-14 23:21:04.947139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.948400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.949563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.769 [2024-05-14 23:21:04.950841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.769 [2024-05-14 23:21:04.952009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3bd7c6055e219604 00:05:41.769 passed 00:05:41.769 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-14 23:21:04.953299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=6784647e77f9900f 00:05:41.769 [2024-05-14 23:21:04.953628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:05:41.769 [2024-05-14 23:21:04.953968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=6687, Actual=6e87 00:05:41.769 [2024-05-14 23:21:04.954240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.954583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.769 [2024-05-14 23:21:04.954876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.955242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.769 [2024-05-14 23:21:04.955506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=ad61 00:05:41.769 [2024-05-14 23:21:04.955854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=75da 00:05:41.769 [2024-05-14 23:21:04.956068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, 
Actual=1ab753ed 00:05:41.770 [2024-05-14 23:21:04.956377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b182d7dd, Actual=b982d7dd 00:05:41.770 [2024-05-14 23:21:04.956621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.770 [2024-05-14 23:21:04.956923] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.770 [2024-05-14 23:21:04.957143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.770 [2024-05-14 23:21:04.957461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.770 [2024-05-14 23:21:04.957685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e6262fae 00:05:41.770 [2024-05-14 23:21:04.957987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=b1cc659d 00:05:41.770 [2024-05-14 23:21:04.958358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:05:41.770 [2024-05-14 23:21:04.958786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a6efcfffd6802d0d, Actual=a6efcfffde802d0d 00:05:41.770 [2024-05-14 23:21:04.959143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.770 [2024-05-14 23:21:04.959579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.770 [2024-05-14 23:21:04.959936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.770 [2024-05-14 23:21:04.960368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:05:41.770 [2024-05-14 23:21:04.960736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3bd7c6055e219604 00:05:41.770 passed 00:05:41.770 Test: dix_sec_512_md_0_error ...[2024-05-14 23:21:04.961182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=6784647e77f9900f 00:05:41.770 [2024-05-14 23:21:04.961242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:05:41.770 passed 00:05:41.770 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:05:41.770 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:05:41.770 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:05:41.770 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:05:41.770 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:05:41.770 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:05:41.770 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:05:41.770 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:05:41.770 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:05:41.770 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-14 23:21:04.985237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:05:41.770 [2024-05-14 23:21:04.986189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=6687, Actual=6e87 00:05:41.770 [2024-05-14 23:21:04.987075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.770 [2024-05-14 23:21:04.988040] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.770 [2024-05-14 23:21:04.988938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.770 [2024-05-14 23:21:04.989904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.770 [2024-05-14 23:21:04.990851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=ad61 00:05:41.770 [2024-05-14 23:21:04.991856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=75da 00:05:41.770 [2024-05-14 23:21:04.992574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:05:41.770 [2024-05-14 23:21:04.993393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b182d7dd, Actual=b982d7dd 00:05:41.770 [2024-05-14 23:21:04.994127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.770 [2024-05-14 23:21:04.994961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:05:41.770 [2024-05-14 23:21:04.995667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:41.770 [2024-05-14 23:21:04.996469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:05:42.043 [2024-05-14 23:21:04.997172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e6262fae 00:05:42.043 [2024-05-14 23:21:04.997975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=eaa640ac, Actual=b1cc659d 00:05:42.043 [2024-05-14 23:21:04.999171] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:05:42.043 [2024-05-14 23:21:05.000570] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5758b76783ef5, Actual=a8b5748b76783ef5 00:05:42.043 [2024-05-14 23:21:05.001878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=188 00:05:42.043 [2024-05-14 23:21:05.003302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=188 00:05:42.044 [2024-05-14 23:21:05.004610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1000000005b 00:05:42.044 [2024-05-14 23:21:05.006015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1000000005b 00:05:42.044 [2024-05-14 23:21:05.007342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=b69820bad6ee392c 00:05:42.044 passed 00:05:42.044 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-14 23:21:05.008754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5748b76783ef5, Actual=abdd0337a12812a7 00:05:42.044 [2024-05-14 23:21:05.009131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fc4c, Actual=fd4c 00:05:42.044 [2024-05-14 23:21:05.009547] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=c829, Actual=c929 00:05:42.044 [2024-05-14 23:21:05.009837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=188 00:05:42.044 [2024-05-14 23:21:05.010206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=188 00:05:42.044 [2024-05-14 23:21:05.010508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=159 00:05:42.044 [2024-05-14 23:21:05.010882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=159 00:05:42.044 [2024-05-14 23:21:05.011196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=9592 00:05:42.044 [2024-05-14 23:21:05.011555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=ea53 00:05:42.044 [2024-05-14 23:21:05.011787] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab752ed, Actual=1ab753ed 00:05:42.044 [2024-05-14 23:21:05.012134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ccf8d1dc, Actual=ccf8d0dc 00:05:42.044 [2024-05-14 23:21:05.012389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, 
Actual=188 00:05:42.044 [2024-05-14 23:21:05.012700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=188 00:05:42.044 [2024-05-14 23:21:05.012924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000059 00:05:42.044 [2024-05-14 23:21:05.013221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000059 00:05:42.044 [2024-05-14 23:21:05.013439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=c620b0b8 00:05:42.044 [2024-05-14 23:21:05.013726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=ad6e4f27 00:05:42.044 [2024-05-14 23:21:05.014113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a6728ecc20d3, Actual=a576a7728ecc20d3 00:05:42.044 [2024-05-14 23:21:05.014616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9d2dad61db6f1347, Actual=9d2dac61db6f1347 00:05:42.044 [2024-05-14 23:21:05.015008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=188 00:05:42.044 [2024-05-14 23:21:05.015498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=188 00:05:42.044 [2024-05-14 23:21:05.015892] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000059 00:05:42.044 [2024-05-14 23:21:05.016371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000059 00:05:42.044 [2024-05-14 23:21:05.016765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=b69820bad6ee392c 00:05:42.044 passed 00:05:42.044 Test: set_md_interleave_iovs_test ...[2024-05-14 23:21:05.017238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=4b4097d79e14193d 00:05:42.044 passed 00:05:42.044 Test: set_md_interleave_iovs_split_test ...passed 00:05:42.044 Test: dif_generate_stream_pi_16_test ...passed 00:05:42.044 Test: dif_generate_stream_test ...passed 00:05:42.044 Test: set_md_interleave_iovs_alignment_test ...passed 00:05:42.044 Test: dif_generate_split_test ...[2024-05-14 23:21:05.022395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:05:42.044 passed 00:05:42.044 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:05:42.044 Test: dif_verify_split_test ...passed 00:05:42.044 Test: dif_verify_stream_multi_segments_test ...passed 00:05:42.044 Test: update_crc32c_pi_16_test ...passed 00:05:42.044 Test: update_crc32c_test ...passed 00:05:42.044 Test: dif_update_crc32c_split_test ...passed 00:05:42.044 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:05:42.044 Test: get_range_with_md_test ...passed 00:05:42.044 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:05:42.044 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:05:42.044 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:05:42.044 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:05:42.044 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:05:42.044 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:05:42.044 Test: dif_generate_and_verify_unmap_test ...passed 00:05:42.044 00:05:42.044 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.044 suites 1 1 n/a 0 0 00:05:42.044 tests 79 79 79 0 0 00:05:42.044 asserts 3584 3584 3584 0 n/a 00:05:42.044 00:05:42.044 Elapsed time = 0.250 seconds 00:05:42.044 23:21:05 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:05:42.044 00:05:42.044 00:05:42.044 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.044 http://cunit.sourceforge.net/ 00:05:42.044 00:05:42.044 00:05:42.044 Suite: iov 00:05:42.044 Test: test_single_iov ...passed 00:05:42.044 Test: test_simple_iov ...passed 00:05:42.044 Test: test_complex_iov ...passed 00:05:42.044 Test: test_iovs_to_buf ...passed 00:05:42.044 Test: test_buf_to_iovs ...passed 00:05:42.044 Test: test_memset ...passed 00:05:42.044 Test: test_iov_one ...passed 00:05:42.044 Test: test_iov_xfer ...passed 00:05:42.044 00:05:42.044 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.044 suites 1 1 n/a 0 0 00:05:42.044 tests 8 8 8 0 0 00:05:42.044 asserts 156 156 156 0 n/a 00:05:42.044 00:05:42.044 Elapsed time = 0.000 seconds 00:05:42.044 23:21:05 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:05:42.044 00:05:42.044 00:05:42.044 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.044 http://cunit.sourceforge.net/ 00:05:42.044 00:05:42.044 00:05:42.044 Suite: math 00:05:42.044 Test: test_serial_number_arithmetic ...passed 00:05:42.044 Suite: erase 00:05:42.044 Test: test_memset_s ...passed 00:05:42.044 00:05:42.044 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.044 suites 2 2 n/a 0 0 00:05:42.044 tests 2 2 2 0 0 00:05:42.044 asserts 18 18 18 0 n/a 00:05:42.044 00:05:42.044 Elapsed time = 0.000 seconds 00:05:42.044 23:21:05 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:05:42.044 00:05:42.044 00:05:42.044 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.044 http://cunit.sourceforge.net/ 00:05:42.044 00:05:42.044 00:05:42.044 Suite: pipe 00:05:42.044 Test: test_create_destroy ...passed 00:05:42.044 Test: test_write_get_buffer ...passed 00:05:42.044 Test: test_write_advance ...passed 00:05:42.044 Test: test_read_get_buffer ...passed 00:05:42.044 Test: test_read_advance ...passed 00:05:42.044 Test: test_data ...passed 00:05:42.044 00:05:42.044 Run 
Summary: Type Total Ran Passed Failed Inactive 00:05:42.044 suites 1 1 n/a 0 0 00:05:42.044 tests 6 6 6 0 0 00:05:42.044 asserts 251 251 251 0 n/a 00:05:42.044 00:05:42.044 Elapsed time = 0.000 seconds 00:05:42.044 23:21:05 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:05:42.044 00:05:42.044 00:05:42.044 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.044 http://cunit.sourceforge.net/ 00:05:42.044 00:05:42.044 00:05:42.044 Suite: xor 00:05:42.044 Test: test_xor_gen ...passed 00:05:42.044 00:05:42.044 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.044 suites 1 1 n/a 0 0 00:05:42.044 tests 1 1 1 0 0 00:05:42.044 asserts 17 17 17 0 n/a 00:05:42.044 00:05:42.044 Elapsed time = 0.000 seconds 00:05:42.044 00:05:42.044 real 0m0.533s 00:05:42.044 user 0m0.349s 00:05:42.044 sys 0m0.190s 00:05:42.044 23:21:05 unittest.unittest_util -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.044 23:21:05 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:05:42.044 ************************************ 00:05:42.044 END TEST unittest_util 00:05:42.044 ************************************ 00:05:42.044 23:21:05 unittest -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:42.044 23:21:05 unittest -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:05:42.044 23:21:05 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.044 23:21:05 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.044 23:21:05 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:42.044 ************************************ 00:05:42.044 START TEST unittest_vhost 00:05:42.044 ************************************ 00:05:42.045 23:21:05 unittest.unittest_vhost -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:05:42.045 00:05:42.045 00:05:42.045 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.045 http://cunit.sourceforge.net/ 00:05:42.045 00:05:42.045 00:05:42.045 Suite: vhost_suite 00:05:42.045 Test: desc_to_iov_test ...[2024-05-14 23:21:05.211027] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:05:42.045 passed 00:05:42.045 Test: create_controller_test ...[2024-05-14 23:21:05.213823] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:05:42.045 [2024-05-14 23:21:05.213910] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:05:42.045 [2024-05-14 23:21:05.214206] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:05:42.045 [2024-05-14 23:21:05.214280] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:05:42.045 [2024-05-14 23:21:05.214308] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:05:42.045 [2024-05-14 23:21:05.214814] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1780:vhost_user_dev_init: *ERROR*: Resulting socket path for controller is too long: 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:05:42.045 [2024-05-14 23:21:05.215536] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:05:42.045 passed 00:05:42.045 Test: session_find_by_vid_test ...passed 00:05:42.045 Test: remove_controller_test ...[2024-05-14 23:21:05.216944] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1865:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:05:42.045 passed 00:05:42.045 Test: vq_avail_ring_get_test ...passed 00:05:42.045 Test: vq_packed_ring_test ...passed 00:05:42.045 Test: vhost_blk_construct_test ...passed 00:05:42.045 00:05:42.045 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.045 suites 1 1 n/a 0 0 00:05:42.045 tests 7 7 7 0 0 00:05:42.045 asserts 147 147 147 0 n/a 00:05:42.045 00:05:42.045 Elapsed time = 0.020 seconds 00:05:42.045 00:05:42.045 real 0m0.036s 00:05:42.045 user 0m0.019s 00:05:42.045 sys 0m0.018s 00:05:42.045 23:21:05 unittest.unittest_vhost -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.045 23:21:05 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:05:42.045 ************************************ 00:05:42.045 END TEST unittest_vhost 00:05:42.045 ************************************ 00:05:42.045 23:21:05 unittest -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:05:42.045 23:21:05 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.045 23:21:05 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.045 23:21:05 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:42.045 ************************************ 00:05:42.045 START TEST unittest_dma 00:05:42.045 ************************************ 00:05:42.045 23:21:05 unittest.unittest_dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:05:42.045 00:05:42.045 00:05:42.045 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.045 http://cunit.sourceforge.net/ 00:05:42.045 00:05:42.045 00:05:42.045 Suite: dma_suite 00:05:42.045 Test: test_dma ...passed 00:05:42.045 00:05:42.045 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.045 suites 1 1 n/a 0 0 00:05:42.045 tests 1 1 1 0 0 00:05:42.045 asserts 54 54 54 0 n/a 00:05:42.045 00:05:42.045 Elapsed time = 0.000 seconds 00:05:42.045 [2024-05-14 23:21:05.289799] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:05:42.045 00:05:42.045 real 0m0.026s 00:05:42.045 user 0m0.015s 00:05:42.045 sys 0m0.012s 00:05:42.045 ************************************ 00:05:42.045 END TEST unittest_dma 00:05:42.045 ************************************ 00:05:42.045 23:21:05 unittest.unittest_dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.045 23:21:05 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:05:42.304 23:21:05 unittest 
-- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:05:42.304 23:21:05 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.304 23:21:05 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.304 23:21:05 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:42.304 ************************************ 00:05:42.304 START TEST unittest_init 00:05:42.304 ************************************ 00:05:42.304 23:21:05 unittest.unittest_init -- common/autotest_common.sh@1121 -- # unittest_init 00:05:42.304 23:21:05 unittest.unittest_init -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:05:42.304 00:05:42.304 00:05:42.304 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.304 http://cunit.sourceforge.net/ 00:05:42.304 00:05:42.304 00:05:42.304 Suite: subsystem_suite 00:05:42.304 Test: subsystem_sort_test_depends_on_single ...passed 00:05:42.304 Test: subsystem_sort_test_depends_on_multiple ...passed 00:05:42.304 Test: subsystem_sort_test_missing_dependency ...[2024-05-14 23:21:05.361794] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:05:42.304 [2024-05-14 23:21:05.362352] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:05:42.304 passed 00:05:42.304 00:05:42.304 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.304 suites 1 1 n/a 0 0 00:05:42.304 tests 3 3 3 0 0 00:05:42.304 asserts 20 20 20 0 n/a 00:05:42.304 00:05:42.304 Elapsed time = 0.000 seconds 00:05:42.304 00:05:42.304 real 0m0.025s 00:05:42.304 user 0m0.013s 00:05:42.304 sys 0m0.012s 00:05:42.304 23:21:05 unittest.unittest_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.304 23:21:05 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:05:42.304 ************************************ 00:05:42.304 END TEST unittest_init 00:05:42.304 ************************************ 00:05:42.304 23:21:05 unittest -- unit/unittest.sh@288 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:05:42.304 23:21:05 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.304 23:21:05 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.304 23:21:05 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:42.304 ************************************ 00:05:42.304 START TEST unittest_keyring 00:05:42.304 ************************************ 00:05:42.304 23:21:05 unittest.unittest_keyring -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:05:42.304 00:05:42.304 00:05:42.304 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.304 http://cunit.sourceforge.net/ 00:05:42.304 00:05:42.304 00:05:42.304 Suite: keyring 00:05:42.304 Test: test_keyring_add_remove ...[2024-05-14 23:21:05.431308] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:05:42.304 [2024-05-14 23:21:05.431523] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:05:42.304 [2024-05-14 23:21:05.431682] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:05:42.304 passed 00:05:42.304 Test: test_keyring_get_put ...passed 00:05:42.304 
00:05:42.304 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.304 suites 1 1 n/a 0 0 00:05:42.304 tests 2 2 2 0 0 00:05:42.304 asserts 44 44 44 0 n/a 00:05:42.304 00:05:42.304 Elapsed time = 0.000 seconds 00:05:42.304 00:05:42.304 real 0m0.026s 00:05:42.304 user 0m0.012s 00:05:42.304 sys 0m0.014s 00:05:42.304 ************************************ 00:05:42.304 END TEST unittest_keyring 00:05:42.304 ************************************ 00:05:42.304 23:21:05 unittest.unittest_keyring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.304 23:21:05 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:05:42.304 23:21:05 unittest -- unit/unittest.sh@290 -- # '[' yes = yes ']' 00:05:42.304 23:21:05 unittest -- unit/unittest.sh@290 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:42.304 23:21:05 unittest -- unit/unittest.sh@291 -- # hostname 00:05:42.304 23:21:05 unittest -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t centos7-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:05:42.633 geninfo: WARNING: invalid characters removed from testname! 00:06:21.372 23:21:38 unittest -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:06:21.372 23:21:43 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:23.272 23:21:46 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:26.555 23:21:49 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:29.838 23:21:52 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:32.369 23:21:55 unittest -- unit/unittest.sh@297 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:34.906 23:21:58 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:37.438 23:22:00 unittest -- unit/unittest.sh@299 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:37.438 23:22:00 unittest -- unit/unittest.sh@300 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:38.375 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:38.375 Found 316 entries. 00:06:38.375 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:06:38.375 Writing .css and .png files. 00:06:38.375 Generating output. 00:06:38.375 Processing file include/linux/virtio_ring.h 00:06:38.634 Processing file include/spdk/util.h 00:06:38.634 Processing file include/spdk/endian.h 00:06:38.634 Processing file include/spdk/thread.h 00:06:38.634 Processing file include/spdk/nvme.h 00:06:38.634 Processing file include/spdk/histogram_data.h 00:06:38.634 Processing file include/spdk/nvme_spec.h 00:06:38.634 Processing file include/spdk/bdev_module.h 00:06:38.634 Processing file include/spdk/trace.h 00:06:38.634 Processing file include/spdk/mmio.h 00:06:38.634 Processing file include/spdk/nvmf_transport.h 00:06:38.634 Processing file include/spdk/base64.h 00:06:38.634 Processing file include/spdk_internal/rdma.h 00:06:38.634 Processing file include/spdk_internal/nvme_tcp.h 00:06:38.634 Processing file include/spdk_internal/sock.h 00:06:38.634 Processing file include/spdk_internal/utf.h 00:06:38.634 Processing file include/spdk_internal/sgl.h 00:06:38.634 Processing file include/spdk_internal/virtio.h 00:06:38.893 Processing file lib/accel/accel_sw.c 00:06:38.893 Processing file lib/accel/accel.c 00:06:38.893 Processing file lib/accel/accel_rpc.c 00:06:39.151 Processing file lib/bdev/bdev.c 00:06:39.151 Processing file lib/bdev/bdev_zone.c 00:06:39.151 Processing file lib/bdev/part.c 00:06:39.151 Processing file lib/bdev/bdev_rpc.c 00:06:39.151 Processing file lib/bdev/scsi_nvme.c 00:06:39.409 Processing file lib/blob/blob_bs_dev.c 00:06:39.409 Processing file lib/blob/blobstore.h 00:06:39.409 Processing file lib/blob/request.c 00:06:39.409 Processing file lib/blob/blobstore.c 00:06:39.409 Processing file lib/blob/zeroes.c 00:06:39.667 Processing file lib/blobfs/blobfs.c 00:06:39.667 Processing file lib/blobfs/tree.c 00:06:39.667 Processing file lib/conf/conf.c 00:06:39.667 Processing file lib/dma/dma.c 00:06:39.925 Processing file lib/env_dpdk/pci_virtio.c 00:06:39.925 Processing file lib/env_dpdk/pci_event.c 00:06:39.925 Processing file lib/env_dpdk/pci_vmd.c 00:06:39.925 Processing file lib/env_dpdk/pci_dpdk.c 00:06:39.925 Processing file 
lib/env_dpdk/pci_dpdk_2207.c 00:06:39.925 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:06:39.925 Processing file lib/env_dpdk/pci_ioat.c 00:06:39.925 Processing file lib/env_dpdk/sigbus_handler.c 00:06:39.925 Processing file lib/env_dpdk/threads.c 00:06:39.925 Processing file lib/env_dpdk/pci_idxd.c 00:06:39.925 Processing file lib/env_dpdk/memory.c 00:06:39.925 Processing file lib/env_dpdk/pci.c 00:06:39.925 Processing file lib/env_dpdk/init.c 00:06:39.925 Processing file lib/env_dpdk/env.c 00:06:40.183 Processing file lib/event/app_rpc.c 00:06:40.183 Processing file lib/event/reactor.c 00:06:40.183 Processing file lib/event/app.c 00:06:40.183 Processing file lib/event/scheduler_static.c 00:06:40.183 Processing file lib/event/log_rpc.c 00:06:40.749 Processing file lib/ftl/ftl_debug.h 00:06:40.749 Processing file lib/ftl/ftl_debug.c 00:06:40.749 Processing file lib/ftl/ftl_core.c 00:06:40.749 Processing file lib/ftl/ftl_io.c 00:06:40.749 Processing file lib/ftl/ftl_core.h 00:06:40.749 Processing file lib/ftl/ftl_io.h 00:06:40.749 Processing file lib/ftl/ftl_band.h 00:06:40.749 Processing file lib/ftl/ftl_writer.c 00:06:40.749 Processing file lib/ftl/ftl_band.c 00:06:40.749 Processing file lib/ftl/ftl_trace.c 00:06:40.749 Processing file lib/ftl/ftl_writer.h 00:06:40.749 Processing file lib/ftl/ftl_sb.c 00:06:40.749 Processing file lib/ftl/ftl_p2l.c 00:06:40.749 Processing file lib/ftl/ftl_rq.c 00:06:40.749 Processing file lib/ftl/ftl_band_ops.c 00:06:40.749 Processing file lib/ftl/ftl_init.c 00:06:40.749 Processing file lib/ftl/ftl_nv_cache_io.h 00:06:40.749 Processing file lib/ftl/ftl_nv_cache.c 00:06:40.749 Processing file lib/ftl/ftl_nv_cache.h 00:06:40.749 Processing file lib/ftl/ftl_l2p_flat.c 00:06:40.749 Processing file lib/ftl/ftl_l2p.c 00:06:40.749 Processing file lib/ftl/ftl_reloc.c 00:06:40.749 Processing file lib/ftl/ftl_l2p_cache.c 00:06:40.749 Processing file lib/ftl/ftl_layout.c 00:06:40.749 Processing file lib/ftl/base/ftl_base_bdev.c 00:06:40.749 Processing file lib/ftl/base/ftl_base_dev.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:06:41.007 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:06:41.007 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:06:41.007 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:06:41.007 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:06:41.007 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:06:41.007 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:06:41.007 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:06:41.266 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:06:41.266 Processing file lib/ftl/utils/ftl_property.h 00:06:41.266 Processing file lib/ftl/utils/ftl_bitmap.c 00:06:41.266 Processing file lib/ftl/utils/ftl_conf.c 00:06:41.266 Processing file lib/ftl/utils/ftl_df.h 00:06:41.266 Processing file lib/ftl/utils/ftl_md.c 00:06:41.266 Processing file 
lib/ftl/utils/ftl_addr_utils.h 00:06:41.266 Processing file lib/ftl/utils/ftl_mempool.c 00:06:41.266 Processing file lib/ftl/utils/ftl_property.c 00:06:41.524 Processing file lib/idxd/idxd.c 00:06:41.524 Processing file lib/idxd/idxd_user.c 00:06:41.524 Processing file lib/idxd/idxd_internal.h 00:06:41.524 Processing file lib/init/subsystem_rpc.c 00:06:41.524 Processing file lib/init/rpc.c 00:06:41.524 Processing file lib/init/json_config.c 00:06:41.524 Processing file lib/init/subsystem.c 00:06:41.524 Processing file lib/ioat/ioat_internal.h 00:06:41.524 Processing file lib/ioat/ioat.c 00:06:42.090 Processing file lib/iscsi/init_grp.c 00:06:42.090 Processing file lib/iscsi/task.h 00:06:42.090 Processing file lib/iscsi/iscsi_subsystem.c 00:06:42.090 Processing file lib/iscsi/conn.c 00:06:42.090 Processing file lib/iscsi/tgt_node.c 00:06:42.090 Processing file lib/iscsi/iscsi_rpc.c 00:06:42.090 Processing file lib/iscsi/portal_grp.c 00:06:42.090 Processing file lib/iscsi/iscsi.h 00:06:42.090 Processing file lib/iscsi/param.c 00:06:42.090 Processing file lib/iscsi/iscsi.c 00:06:42.090 Processing file lib/iscsi/md5.c 00:06:42.090 Processing file lib/iscsi/task.c 00:06:42.090 Processing file lib/json/json_parse.c 00:06:42.090 Processing file lib/json/json_util.c 00:06:42.090 Processing file lib/json/json_write.c 00:06:42.090 Processing file lib/jsonrpc/jsonrpc_server.c 00:06:42.090 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:06:42.090 Processing file lib/jsonrpc/jsonrpc_client.c 00:06:42.090 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:06:42.347 Processing file lib/keyring/keyring_rpc.c 00:06:42.347 Processing file lib/keyring/keyring.c 00:06:42.347 Processing file lib/log/log_flags.c 00:06:42.347 Processing file lib/log/log_deprecated.c 00:06:42.347 Processing file lib/log/log.c 00:06:42.347 Processing file lib/lvol/lvol.c 00:06:42.347 Processing file lib/nbd/nbd.c 00:06:42.347 Processing file lib/nbd/nbd_rpc.c 00:06:42.638 Processing file lib/notify/notify_rpc.c 00:06:42.638 Processing file lib/notify/notify.c 00:06:43.204 Processing file lib/nvme/nvme_cuse.c 00:06:43.204 Processing file lib/nvme/nvme_ctrlr.c 00:06:43.204 Processing file lib/nvme/nvme_poll_group.c 00:06:43.204 Processing file lib/nvme/nvme_stubs.c 00:06:43.204 Processing file lib/nvme/nvme_ns_cmd.c 00:06:43.204 Processing file lib/nvme/nvme_tcp.c 00:06:43.204 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:06:43.204 Processing file lib/nvme/nvme_discovery.c 00:06:43.204 Processing file lib/nvme/nvme_fabric.c 00:06:43.204 Processing file lib/nvme/nvme_opal.c 00:06:43.204 Processing file lib/nvme/nvme_transport.c 00:06:43.204 Processing file lib/nvme/nvme_ns.c 00:06:43.204 Processing file lib/nvme/nvme_pcie_common.c 00:06:43.204 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:06:43.204 Processing file lib/nvme/nvme_io_msg.c 00:06:43.204 Processing file lib/nvme/nvme_pcie_internal.h 00:06:43.204 Processing file lib/nvme/nvme_auth.c 00:06:43.204 Processing file lib/nvme/nvme.c 00:06:43.204 Processing file lib/nvme/nvme_pcie.c 00:06:43.204 Processing file lib/nvme/nvme_internal.h 00:06:43.204 Processing file lib/nvme/nvme_zns.c 00:06:43.204 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:06:43.204 Processing file lib/nvme/nvme_rdma.c 00:06:43.204 Processing file lib/nvme/nvme_qpair.c 00:06:43.204 Processing file lib/nvme/nvme_quirks.c 00:06:44.137 Processing file lib/nvmf/nvmf.c 00:06:44.137 Processing file lib/nvmf/nvmf_internal.h 00:06:44.137 Processing file lib/nvmf/stubs.c 00:06:44.137 Processing file 
lib/nvmf/nvmf_rpc.c 00:06:44.137 Processing file lib/nvmf/ctrlr.c 00:06:44.137 Processing file lib/nvmf/auth.c 00:06:44.137 Processing file lib/nvmf/subsystem.c 00:06:44.137 Processing file lib/nvmf/tcp.c 00:06:44.137 Processing file lib/nvmf/transport.c 00:06:44.137 Processing file lib/nvmf/ctrlr_bdev.c 00:06:44.137 Processing file lib/nvmf/rdma.c 00:06:44.137 Processing file lib/nvmf/ctrlr_discovery.c 00:06:44.137 Processing file lib/rdma/common.c 00:06:44.137 Processing file lib/rdma/rdma_verbs.c 00:06:44.137 Processing file lib/rpc/rpc.c 00:06:44.396 Processing file lib/scsi/port.c 00:06:44.396 Processing file lib/scsi/scsi_bdev.c 00:06:44.396 Processing file lib/scsi/lun.c 00:06:44.396 Processing file lib/scsi/scsi_pr.c 00:06:44.396 Processing file lib/scsi/task.c 00:06:44.396 Processing file lib/scsi/dev.c 00:06:44.396 Processing file lib/scsi/scsi.c 00:06:44.396 Processing file lib/scsi/scsi_rpc.c 00:06:44.396 Processing file lib/sock/sock_rpc.c 00:06:44.396 Processing file lib/sock/sock.c 00:06:44.396 Processing file lib/thread/thread.c 00:06:44.396 Processing file lib/thread/iobuf.c 00:06:44.655 Processing file lib/trace/trace_rpc.c 00:06:44.655 Processing file lib/trace/trace_flags.c 00:06:44.655 Processing file lib/trace/trace.c 00:06:44.655 Processing file lib/trace_parser/trace.cpp 00:06:44.655 Processing file lib/ut/ut.c 00:06:44.655 Processing file lib/ut_mock/mock.c 00:06:45.223 Processing file lib/util/string.c 00:06:45.223 Processing file lib/util/strerror_tls.c 00:06:45.223 Processing file lib/util/hexlify.c 00:06:45.223 Processing file lib/util/uuid.c 00:06:45.223 Processing file lib/util/fd_group.c 00:06:45.223 Processing file lib/util/crc16.c 00:06:45.223 Processing file lib/util/xor.c 00:06:45.223 Processing file lib/util/math.c 00:06:45.223 Processing file lib/util/dif.c 00:06:45.223 Processing file lib/util/bit_array.c 00:06:45.223 Processing file lib/util/fd.c 00:06:45.223 Processing file lib/util/iov.c 00:06:45.223 Processing file lib/util/crc64.c 00:06:45.223 Processing file lib/util/cpuset.c 00:06:45.223 Processing file lib/util/zipf.c 00:06:45.223 Processing file lib/util/crc32.c 00:06:45.224 Processing file lib/util/crc32c.c 00:06:45.224 Processing file lib/util/crc32_ieee.c 00:06:45.224 Processing file lib/util/file.c 00:06:45.224 Processing file lib/util/pipe.c 00:06:45.224 Processing file lib/util/base64.c 00:06:45.224 Processing file lib/vfio_user/host/vfio_user_pci.c 00:06:45.224 Processing file lib/vfio_user/host/vfio_user.c 00:06:45.482 Processing file lib/vhost/rte_vhost_user.c 00:06:45.483 Processing file lib/vhost/vhost_rpc.c 00:06:45.483 Processing file lib/vhost/vhost_blk.c 00:06:45.483 Processing file lib/vhost/vhost_scsi.c 00:06:45.483 Processing file lib/vhost/vhost.c 00:06:45.483 Processing file lib/vhost/vhost_internal.h 00:06:45.483 Processing file lib/virtio/virtio_vfio_user.c 00:06:45.483 Processing file lib/virtio/virtio.c 00:06:45.483 Processing file lib/virtio/virtio_pci.c 00:06:45.483 Processing file lib/virtio/virtio_vhost_user.c 00:06:45.741 Processing file lib/vmd/vmd.c 00:06:45.741 Processing file lib/vmd/led.c 00:06:45.741 Processing file module/accel/dsa/accel_dsa.c 00:06:45.741 Processing file module/accel/dsa/accel_dsa_rpc.c 00:06:45.741 Processing file module/accel/error/accel_error_rpc.c 00:06:45.741 Processing file module/accel/error/accel_error.c 00:06:45.741 Processing file module/accel/iaa/accel_iaa.c 00:06:45.741 Processing file module/accel/iaa/accel_iaa_rpc.c 00:06:46.000 Processing file module/accel/ioat/accel_ioat.c 
00:06:46.000 Processing file module/accel/ioat/accel_ioat_rpc.c 00:06:46.000 Processing file module/bdev/aio/bdev_aio.c 00:06:46.000 Processing file module/bdev/aio/bdev_aio_rpc.c 00:06:46.000 Processing file module/bdev/daos/bdev_daos_rpc.c 00:06:46.000 Processing file module/bdev/daos/bdev_daos.c 00:06:46.000 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:06:46.000 Processing file module/bdev/delay/vbdev_delay.c 00:06:46.258 Processing file module/bdev/error/vbdev_error_rpc.c 00:06:46.258 Processing file module/bdev/error/vbdev_error.c 00:06:46.258 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:06:46.258 Processing file module/bdev/ftl/bdev_ftl.c 00:06:46.258 Processing file module/bdev/gpt/vbdev_gpt.c 00:06:46.258 Processing file module/bdev/gpt/gpt.c 00:06:46.258 Processing file module/bdev/gpt/gpt.h 00:06:46.517 Processing file module/bdev/lvol/vbdev_lvol.c 00:06:46.517 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:06:46.517 Processing file module/bdev/malloc/bdev_malloc.c 00:06:46.517 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:06:46.517 Processing file module/bdev/null/bdev_null_rpc.c 00:06:46.517 Processing file module/bdev/null/bdev_null.c 00:06:47.087 Processing file module/bdev/nvme/bdev_mdns_client.c 00:06:47.087 Processing file module/bdev/nvme/bdev_nvme.c 00:06:47.087 Processing file module/bdev/nvme/vbdev_opal.c 00:06:47.087 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:06:47.087 Processing file module/bdev/nvme/nvme_rpc.c 00:06:47.087 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:06:47.087 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:06:47.087 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:06:47.087 Processing file module/bdev/passthru/vbdev_passthru.c 00:06:47.345 Processing file module/bdev/raid/raid0.c 00:06:47.345 Processing file module/bdev/raid/bdev_raid_rpc.c 00:06:47.345 Processing file module/bdev/raid/bdev_raid.h 00:06:47.345 Processing file module/bdev/raid/concat.c 00:06:47.345 Processing file module/bdev/raid/raid1.c 00:06:47.345 Processing file module/bdev/raid/bdev_raid_sb.c 00:06:47.345 Processing file module/bdev/raid/bdev_raid.c 00:06:47.345 Processing file module/bdev/split/vbdev_split.c 00:06:47.345 Processing file module/bdev/split/vbdev_split_rpc.c 00:06:47.604 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:06:47.604 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:06:47.604 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:06:47.604 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:06:47.604 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:06:47.604 Processing file module/blob/bdev/blob_bdev.c 00:06:47.604 Processing file module/blobfs/bdev/blobfs_bdev.c 00:06:47.604 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:06:47.862 Processing file module/env_dpdk/env_dpdk_rpc.c 00:06:47.862 Processing file module/event/subsystems/accel/accel.c 00:06:47.862 Processing file module/event/subsystems/bdev/bdev.c 00:06:47.862 Processing file module/event/subsystems/iobuf/iobuf.c 00:06:47.862 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:06:47.862 Processing file module/event/subsystems/iscsi/iscsi.c 00:06:48.121 Processing file module/event/subsystems/keyring/keyring.c 00:06:48.121 Processing file module/event/subsystems/nbd/nbd.c 00:06:48.121 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:06:48.121 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:06:48.121 Processing file 
module/event/subsystems/scheduler/scheduler.c 00:06:48.121 Processing file module/event/subsystems/scsi/scsi.c 00:06:48.379 Processing file module/event/subsystems/sock/sock.c 00:06:48.379 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:06:48.379 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:06:48.379 Processing file module/event/subsystems/vmd/vmd.c 00:06:48.379 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:06:48.379 Processing file module/keyring/file/keyring_rpc.c 00:06:48.379 Processing file module/keyring/file/keyring.c 00:06:48.637 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:06:48.637 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:06:48.637 Processing file module/scheduler/gscheduler/gscheduler.c 00:06:48.637 Processing file module/sock/sock_kernel.h 00:06:48.896 Processing file module/sock/posix/posix.c 00:06:48.896 Writing directory view page. 00:06:48.896 Overall coverage rate: 00:06:48.896 lines......: 38.7% (39612 of 102303 lines) 00:06:48.896 functions..: 42.3% (3618 of 8546 functions) 00:06:48.896 00:06:48.896 00:06:48.896 ===================== 00:06:48.896 All unit tests passed 00:06:48.896 ===================== 00:06:48.896 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:48.896 23:22:11 unittest -- unit/unittest.sh@303 -- # set +x 00:06:48.896 00:06:48.896 00:06:48.896 00:06:48.896 real 2m42.971s 00:06:48.896 user 2m19.937s 00:06:48.896 sys 0m13.241s 00:06:48.896 23:22:11 unittest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.896 23:22:11 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:48.896 ************************************ 00:06:48.896 END TEST unittest 00:06:48.896 ************************************ 00:06:48.896 23:22:12 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:48.896 23:22:12 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:48.896 23:22:12 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:48.896 23:22:12 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:48.896 23:22:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:48.896 23:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:48.896 23:22:12 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:48.896 23:22:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.896 23:22:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.896 23:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:48.896 ************************************ 00:06:48.896 START TEST env 00:06:48.896 ************************************ 00:06:48.896 23:22:12 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:48.896 * Looking for test storage... 
00:06:48.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:48.896 23:22:12 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:48.896 23:22:12 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.896 23:22:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.896 23:22:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:48.896 ************************************ 00:06:48.896 START TEST env_memory 00:06:48.896 ************************************ 00:06:48.896 23:22:12 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:48.896 00:06:48.896 00:06:48.896 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.896 http://cunit.sourceforge.net/ 00:06:48.896 00:06:48.896 00:06:48.896 Suite: memory 00:06:48.896 Test: alloc and free memory map ...[2024-05-14 23:22:12.161678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:49.156 passed 00:06:49.156 Test: mem map translation ...[2024-05-14 23:22:12.192618] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:49.156 [2024-05-14 23:22:12.192726] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:49.156 [2024-05-14 23:22:12.192801] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:49.156 [2024-05-14 23:22:12.192871] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:49.156 passed 00:06:49.156 Test: mem map registration ...[2024-05-14 23:22:12.233249] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:49.156 [2024-05-14 23:22:12.233345] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:49.156 passed 00:06:49.156 Test: mem map adjacent registrations ...passed 00:06:49.156 00:06:49.156 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.156 suites 1 1 n/a 0 0 00:06:49.156 tests 4 4 4 0 0 00:06:49.156 asserts 152 152 152 0 n/a 00:06:49.156 00:06:49.156 Elapsed time = 0.150 seconds 00:06:49.156 00:06:49.156 real 0m0.179s 00:06:49.156 user 0m0.160s 00:06:49.156 sys 0m0.019s 00:06:49.156 23:22:12 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.156 ************************************ 00:06:49.156 END TEST env_memory 00:06:49.156 ************************************ 00:06:49.156 23:22:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:49.156 23:22:12 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:49.156 23:22:12 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:49.156 23:22:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.156 23:22:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:49.156 ************************************ 00:06:49.156 START TEST env_vtophys 00:06:49.156 ************************************ 00:06:49.156 23:22:12 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:49.415 EAL: lib.eal log level changed from notice to debug 00:06:49.415 EAL: Detected lcore 0 as core 0 on socket 0 00:06:49.415 EAL: Detected lcore 1 as core 0 on socket 0 00:06:49.415 EAL: Detected lcore 2 as core 0 on socket 0 00:06:49.415 EAL: Detected lcore 3 as core 0 on socket 0 00:06:49.415 EAL: Detected lcore 4 as core 0 on socket 0 00:06:49.415 EAL: Detected lcore 5 as core 0 on socket 0 00:06:49.415 EAL: Detected lcore 6 as core 0 on socket 0 00:06:49.415 EAL: Detected lcore 7 as core 0 on socket 0 00:06:49.415 EAL: Detected lcore 8 as core 0 on socket 0 00:06:49.415 EAL: Detected lcore 9 as core 0 on socket 0 00:06:49.415 EAL: Maximum logical cores by configuration: 128 00:06:49.415 EAL: Detected CPU lcores: 10 00:06:49.415 EAL: Detected NUMA nodes: 1 00:06:49.415 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:49.415 EAL: Checking presence of .so 'librte_eal.so.24' 00:06:49.415 EAL: Checking presence of .so 'librte_eal.so' 00:06:49.415 EAL: Detected static linkage of DPDK 00:06:49.415 EAL: No shared files mode enabled, IPC will be disabled 00:06:49.415 EAL: Selected IOVA mode 'PA' 00:06:49.415 EAL: Probing VFIO support... 00:06:49.415 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:49.415 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:49.415 EAL: Ask a virtual area of 0x2e000 bytes 00:06:49.415 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:49.415 EAL: Setting up physically contiguous memory... 00:06:49.415 EAL: Setting maximum number of open files to 4096 00:06:49.415 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:49.415 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:49.415 EAL: Ask a virtual area of 0x61000 bytes 00:06:49.415 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:49.415 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:49.415 EAL: Ask a virtual area of 0x400000000 bytes 00:06:49.415 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:49.415 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:49.415 EAL: Ask a virtual area of 0x61000 bytes 00:06:49.415 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:49.415 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:49.415 EAL: Ask a virtual area of 0x400000000 bytes 00:06:49.415 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:49.415 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:49.415 EAL: Ask a virtual area of 0x61000 bytes 00:06:49.415 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:49.415 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:49.415 EAL: Ask a virtual area of 0x400000000 bytes 00:06:49.415 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:49.415 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:49.415 EAL: Ask a virtual area of 0x61000 bytes 00:06:49.415 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:49.415 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:49.415 EAL: Ask a virtual area of 0x400000000 bytes 00:06:49.415 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:49.415 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:49.415 EAL: Hugepages will be freed exactly as allocated. 
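The EAL lines above report the memseg geometry for this hugepage-less vtophys run: four segment lists, each sized for 8192 segments of 2 MiB (hugepage_sz:2097152), which is exactly where the repeated 0x400000000-byte virtual-area requests come from. A quick check of that arithmetic in plain shell (nothing SPDK-specific assumed):
  # 8192 segments x 2 MiB hugepages = the VA window requested per memseg list
  printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))    # prints 0x400000000 (16 GiB)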
00:06:49.415 EAL: No shared files mode enabled, IPC is disabled 00:06:49.415 EAL: No shared files mode enabled, IPC is disabled 00:06:49.415 EAL: TSC frequency is ~2200000 KHz 00:06:49.415 EAL: Main lcore 0 is ready (tid=7f550c91d180;cpuset=[0]) 00:06:49.415 EAL: Trying to obtain current memory policy. 00:06:49.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:49.415 EAL: Restoring previous memory policy: 0 00:06:49.415 EAL: request: mp_malloc_sync 00:06:49.415 EAL: No shared files mode enabled, IPC is disabled 00:06:49.415 EAL: Heap on socket 0 was expanded by 2MB 00:06:49.415 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:49.415 EAL: Mem event callback 'spdk:(nil)' registered 00:06:49.415 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:49.415 00:06:49.415 00:06:49.415 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.415 http://cunit.sourceforge.net/ 00:06:49.415 00:06:49.415 00:06:49.415 Suite: components_suite 00:06:49.982 Test: vtophys_malloc_test ...passed 00:06:49.982 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:49.982 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:49.982 EAL: Restoring previous memory policy: 0 00:06:49.982 EAL: Calling mem event callback 'spdk:(nil)' 00:06:49.982 EAL: request: mp_malloc_sync 00:06:49.982 EAL: No shared files mode enabled, IPC is disabled 00:06:49.982 EAL: Heap on socket 0 was expanded by 4MB 00:06:49.982 EAL: Calling mem event callback 'spdk:(nil)' 00:06:49.982 EAL: request: mp_malloc_sync 00:06:49.982 EAL: No shared files mode enabled, IPC is disabled 00:06:49.982 EAL: Heap on socket 0 was shrunk by 4MB 00:06:49.982 EAL: Trying to obtain current memory policy. 00:06:49.982 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:49.982 EAL: Restoring previous memory policy: 0 00:06:49.982 EAL: Calling mem event callback 'spdk:(nil)' 00:06:49.982 EAL: request: mp_malloc_sync 00:06:49.982 EAL: No shared files mode enabled, IPC is disabled 00:06:49.982 EAL: Heap on socket 0 was expanded by 6MB 00:06:49.982 EAL: Calling mem event callback 'spdk:(nil)' 00:06:49.982 EAL: request: mp_malloc_sync 00:06:49.982 EAL: No shared files mode enabled, IPC is disabled 00:06:49.982 EAL: Heap on socket 0 was shrunk by 6MB 00:06:49.982 EAL: Trying to obtain current memory policy. 00:06:49.982 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:49.982 EAL: Restoring previous memory policy: 0 00:06:49.982 EAL: Calling mem event callback 'spdk:(nil)' 00:06:49.982 EAL: request: mp_malloc_sync 00:06:49.982 EAL: No shared files mode enabled, IPC is disabled 00:06:49.982 EAL: Heap on socket 0 was expanded by 10MB 00:06:49.982 EAL: Calling mem event callback 'spdk:(nil)' 00:06:49.982 EAL: request: mp_malloc_sync 00:06:49.982 EAL: No shared files mode enabled, IPC is disabled 00:06:49.982 EAL: Heap on socket 0 was shrunk by 10MB 00:06:49.982 EAL: Trying to obtain current memory policy. 
00:06:49.982 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:49.982 EAL: Restoring previous memory policy: 0 00:06:49.982 EAL: Calling mem event callback 'spdk:(nil)' 00:06:49.982 EAL: request: mp_malloc_sync 00:06:49.982 EAL: No shared files mode enabled, IPC is disabled 00:06:49.982 EAL: Heap on socket 0 was expanded by 18MB 00:06:49.982 EAL: Calling mem event callback 'spdk:(nil)' 00:06:49.982 EAL: request: mp_malloc_sync 00:06:49.982 EAL: No shared files mode enabled, IPC is disabled 00:06:49.982 EAL: Heap on socket 0 was shrunk by 18MB 00:06:49.982 EAL: Trying to obtain current memory policy. 00:06:49.982 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:49.982 EAL: Restoring previous memory policy: 0 00:06:49.982 EAL: Calling mem event callback 'spdk:(nil)' 00:06:49.982 EAL: request: mp_malloc_sync 00:06:49.982 EAL: No shared files mode enabled, IPC is disabled 00:06:49.982 EAL: Heap on socket 0 was expanded by 34MB 00:06:50.240 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.240 EAL: request: mp_malloc_sync 00:06:50.240 EAL: No shared files mode enabled, IPC is disabled 00:06:50.240 EAL: Heap on socket 0 was shrunk by 34MB 00:06:50.240 EAL: Trying to obtain current memory policy. 00:06:50.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.240 EAL: Restoring previous memory policy: 0 00:06:50.240 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.240 EAL: request: mp_malloc_sync 00:06:50.240 EAL: No shared files mode enabled, IPC is disabled 00:06:50.240 EAL: Heap on socket 0 was expanded by 66MB 00:06:50.240 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.240 EAL: request: mp_malloc_sync 00:06:50.240 EAL: No shared files mode enabled, IPC is disabled 00:06:50.240 EAL: Heap on socket 0 was shrunk by 66MB 00:06:50.497 EAL: Trying to obtain current memory policy. 00:06:50.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.497 EAL: Restoring previous memory policy: 0 00:06:50.497 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.497 EAL: request: mp_malloc_sync 00:06:50.497 EAL: No shared files mode enabled, IPC is disabled 00:06:50.497 EAL: Heap on socket 0 was expanded by 130MB 00:06:50.755 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.755 EAL: request: mp_malloc_sync 00:06:50.755 EAL: No shared files mode enabled, IPC is disabled 00:06:50.755 EAL: Heap on socket 0 was shrunk by 130MB 00:06:50.755 EAL: Trying to obtain current memory policy. 00:06:50.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.013 EAL: Restoring previous memory policy: 0 00:06:51.013 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.013 EAL: request: mp_malloc_sync 00:06:51.013 EAL: No shared files mode enabled, IPC is disabled 00:06:51.013 EAL: Heap on socket 0 was expanded by 258MB 00:06:51.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.272 EAL: request: mp_malloc_sync 00:06:51.272 EAL: No shared files mode enabled, IPC is disabled 00:06:51.272 EAL: Heap on socket 0 was shrunk by 258MB 00:06:51.839 EAL: Trying to obtain current memory policy. 
00:06:51.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.839 EAL: Restoring previous memory policy: 0 00:06:51.839 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.839 EAL: request: mp_malloc_sync 00:06:51.839 EAL: No shared files mode enabled, IPC is disabled 00:06:51.839 EAL: Heap on socket 0 was expanded by 514MB 00:06:52.773 EAL: Calling mem event callback 'spdk:(nil)' 00:06:52.773 EAL: request: mp_malloc_sync 00:06:52.773 EAL: No shared files mode enabled, IPC is disabled 00:06:52.773 EAL: Heap on socket 0 was shrunk by 514MB 00:06:53.707 EAL: Trying to obtain current memory policy. 00:06:53.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:53.966 EAL: Restoring previous memory policy: 0 00:06:53.966 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.966 EAL: request: mp_malloc_sync 00:06:53.966 EAL: No shared files mode enabled, IPC is disabled 00:06:53.966 EAL: Heap on socket 0 was expanded by 1026MB 00:06:55.874 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.874 EAL: request: mp_malloc_sync 00:06:55.874 EAL: No shared files mode enabled, IPC is disabled 00:06:55.874 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:57.249 passed 00:06:57.249 00:06:57.249 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.249 suites 1 1 n/a 0 0 00:06:57.249 tests 2 2 2 0 0 00:06:57.249 asserts 6713 6713 6713 0 n/a 00:06:57.249 00:06:57.249 Elapsed time = 7.730 seconds 00:06:57.249 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.249 EAL: request: mp_malloc_sync 00:06:57.249 EAL: No shared files mode enabled, IPC is disabled 00:06:57.249 EAL: Heap on socket 0 was shrunk by 2MB 00:06:57.249 EAL: No shared files mode enabled, IPC is disabled 00:06:57.249 EAL: No shared files mode enabled, IPC is disabled 00:06:57.249 EAL: No shared files mode enabled, IPC is disabled 00:06:57.249 ************************************ 00:06:57.249 END TEST env_vtophys 00:06:57.249 ************************************ 00:06:57.249 00:06:57.249 real 0m8.086s 00:06:57.249 user 0m6.848s 00:06:57.249 sys 0m1.025s 00:06:57.249 23:22:20 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.249 23:22:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:57.249 23:22:20 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:57.249 23:22:20 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.249 23:22:20 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.249 23:22:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:57.249 ************************************ 00:06:57.249 START TEST env_pci 00:06:57.249 ************************************ 00:06:57.249 23:22:20 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:57.249 00:06:57.249 00:06:57.249 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.249 http://cunit.sourceforge.net/ 00:06:57.249 00:06:57.249 00:06:57.249 Suite: pci 00:06:57.249 Test: pci_hook ...[2024-05-14 23:22:20.512895] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 45816 has claimed it 00:06:57.507 passed 00:06:57.507 00:06:57.507 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.507 suites 1 1 n/a 0 0 00:06:57.507 tests 1 1 1 0 0 00:06:57.507 asserts 25 25 25 0 n/a 00:06:57.507 00:06:57.507 Elapsed time = 0.000 seconds 00:06:57.507 EAL: Cannot find 
device (10000:00:01.0) 00:06:57.507 EAL: Failed to attach device on primary process 00:06:57.507 ************************************ 00:06:57.507 END TEST env_pci 00:06:57.507 ************************************ 00:06:57.507 00:06:57.507 real 0m0.069s 00:06:57.507 user 0m0.037s 00:06:57.507 sys 0m0.033s 00:06:57.507 23:22:20 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.507 23:22:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:57.507 23:22:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:57.507 23:22:20 env -- env/env.sh@15 -- # uname 00:06:57.507 23:22:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:57.507 23:22:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:57.507 23:22:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:57.507 23:22:20 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:57.507 23:22:20 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.507 23:22:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:57.507 ************************************ 00:06:57.507 START TEST env_dpdk_post_init 00:06:57.507 ************************************ 00:06:57.507 23:22:20 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:57.507 EAL: Detected CPU lcores: 10 00:06:57.507 EAL: Detected NUMA nodes: 1 00:06:57.507 EAL: Detected static linkage of DPDK 00:06:57.507 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:57.507 EAL: Selected IOVA mode 'PA' 00:06:57.765 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:57.765 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket 0) 00:06:57.765 Starting DPDK initialization... 00:06:57.765 Starting SPDK post initialization... 00:06:57.765 SPDK NVMe probe 00:06:57.765 Attaching to 0000:00:10.0 00:06:57.765 Attached to 0000:00:10.0 00:06:57.765 Cleaning up... 
00:06:57.765 00:06:57.765 real 0m0.325s 00:06:57.765 user 0m0.069s 00:06:57.765 sys 0m0.060s 00:06:57.765 23:22:20 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.765 23:22:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:57.765 ************************************ 00:06:57.765 END TEST env_dpdk_post_init 00:06:57.765 ************************************ 00:06:57.765 23:22:20 env -- env/env.sh@26 -- # uname 00:06:57.765 23:22:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:57.765 23:22:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:57.765 23:22:20 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.765 23:22:20 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.765 23:22:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:57.765 ************************************ 00:06:57.765 START TEST env_mem_callbacks 00:06:57.765 ************************************ 00:06:57.765 23:22:20 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:57.765 EAL: Detected CPU lcores: 10 00:06:57.765 EAL: Detected NUMA nodes: 1 00:06:57.765 EAL: Detected static linkage of DPDK 00:06:58.024 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:58.024 EAL: Selected IOVA mode 'PA' 00:06:58.024 00:06:58.024 00:06:58.024 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.024 http://cunit.sourceforge.net/ 00:06:58.024 00:06:58.024 00:06:58.024 Suite: memory 00:06:58.024 Test: test ... 00:06:58.024 register 0x200000200000 2097152 00:06:58.024 malloc 3145728 00:06:58.024 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:58.024 register 0x200000400000 4194304 00:06:58.024 buf 0x2000004fffc0 len 3145728 PASSED 00:06:58.024 malloc 64 00:06:58.024 buf 0x2000004ffec0 len 64 PASSED 00:06:58.024 malloc 4194304 00:06:58.024 register 0x200000800000 6291456 00:06:58.024 buf 0x2000009fffc0 len 4194304 PASSED 00:06:58.024 free 0x2000004fffc0 3145728 00:06:58.024 free 0x2000004ffec0 64 00:06:58.024 unregister 0x200000400000 4194304 PASSED 00:06:58.024 free 0x2000009fffc0 4194304 00:06:58.024 unregister 0x200000800000 6291456 PASSED 00:06:58.024 malloc 8388608 00:06:58.024 register 0x200000400000 10485760 00:06:58.024 buf 0x2000005fffc0 len 8388608 PASSED 00:06:58.024 free 0x2000005fffc0 8388608 00:06:58.024 unregister 0x200000400000 10485760 PASSED 00:06:58.024 passed 00:06:58.024 00:06:58.024 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.024 suites 1 1 n/a 0 0 00:06:58.024 tests 1 1 1 0 0 00:06:58.024 asserts 15 15 15 0 n/a 00:06:58.024 00:06:58.024 Elapsed time = 0.050 seconds 00:06:58.024 ************************************ 00:06:58.024 END TEST env_mem_callbacks 00:06:58.024 ************************************ 00:06:58.024 00:06:58.024 real 0m0.263s 00:06:58.024 user 0m0.092s 00:06:58.024 sys 0m0.069s 00:06:58.024 23:22:21 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.024 23:22:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:58.024 00:06:58.024 real 0m9.261s 00:06:58.024 user 0m7.328s 00:06:58.024 sys 0m1.408s 00:06:58.024 23:22:21 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.024 ************************************ 00:06:58.024 23:22:21 env -- common/autotest_common.sh@10 -- # set +x 00:06:58.024 END TEST env 00:06:58.024 
************************************ 00:06:58.283 23:22:21 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:58.283 23:22:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:58.283 23:22:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.283 23:22:21 -- common/autotest_common.sh@10 -- # set +x 00:06:58.283 ************************************ 00:06:58.283 START TEST rpc 00:06:58.283 ************************************ 00:06:58.283 23:22:21 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:58.283 * Looking for test storage... 00:06:58.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:58.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.283 23:22:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=45958 00:06:58.283 23:22:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.283 23:22:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 45958 00:06:58.283 23:22:21 rpc -- common/autotest_common.sh@827 -- # '[' -z 45958 ']' 00:06:58.283 23:22:21 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.283 23:22:21 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:58.283 23:22:21 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:58.283 23:22:21 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.283 23:22:21 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:58.283 23:22:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.542 [2024-05-14 23:22:21.581837] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:06:58.542 [2024-05-14 23:22:21.582039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45958 ] 00:06:58.542 [2024-05-14 23:22:21.752286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.801 [2024-05-14 23:22:21.961284] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:58.801 [2024-05-14 23:22:21.961355] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45958' to capture a snapshot of events at runtime. 00:06:58.801 [2024-05-14 23:22:21.961405] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.801 [2024-05-14 23:22:21.961431] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.801 [2024-05-14 23:22:21.961469] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid45958 for offline analysis/debug. 
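The app_setup_trace notices above double as a how-to for grabbing a trace of this spdk_tgt instance (started with '-e bdev', pid 45958 in this run). A minimal sketch of the two options the notices describe, with the pid and shm path taken from this log output (substitute the values from your own run):
  # option 1: live capture with the spdk_trace tool, as the notice suggests
  spdk_trace -s spdk_tgt -p 45958
  # option 2: keep the shared-memory trace file for offline analysis
  cp /dev/shm/spdk_tgt_trace.pid45958 ./spdk_tgt_trace.capture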
00:06:58.801 [2024-05-14 23:22:21.961523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.735 23:22:22 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.735 23:22:22 rpc -- common/autotest_common.sh@860 -- # return 0 00:06:59.735 23:22:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:59.735 23:22:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:59.735 23:22:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:59.735 23:22:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:59.735 23:22:22 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.735 23:22:22 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.735 23:22:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.735 ************************************ 00:06:59.735 START TEST rpc_integrity 00:06:59.735 ************************************ 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:59.735 { 00:06:59.735 "name": "Malloc0", 00:06:59.735 "aliases": [ 00:06:59.735 "36ec930a-2486-4c2c-85eb-205ab4bd828a" 00:06:59.735 ], 00:06:59.735 "product_name": "Malloc disk", 00:06:59.735 "block_size": 512, 00:06:59.735 "num_blocks": 16384, 00:06:59.735 "uuid": "36ec930a-2486-4c2c-85eb-205ab4bd828a", 00:06:59.735 "assigned_rate_limits": { 00:06:59.735 "rw_ios_per_sec": 0, 00:06:59.735 "rw_mbytes_per_sec": 0, 00:06:59.735 "r_mbytes_per_sec": 0, 00:06:59.735 "w_mbytes_per_sec": 0 00:06:59.735 }, 00:06:59.735 "claimed": false, 00:06:59.735 "zoned": false, 00:06:59.735 "supported_io_types": { 00:06:59.735 "read": true, 00:06:59.735 "write": true, 00:06:59.735 "unmap": true, 00:06:59.735 "write_zeroes": 
true, 00:06:59.735 "flush": true, 00:06:59.735 "reset": true, 00:06:59.735 "compare": false, 00:06:59.735 "compare_and_write": false, 00:06:59.735 "abort": true, 00:06:59.735 "nvme_admin": false, 00:06:59.735 "nvme_io": false 00:06:59.735 }, 00:06:59.735 "memory_domains": [ 00:06:59.735 { 00:06:59.735 "dma_device_id": "system", 00:06:59.735 "dma_device_type": 1 00:06:59.735 }, 00:06:59.735 { 00:06:59.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.735 "dma_device_type": 2 00:06:59.735 } 00:06:59.735 ], 00:06:59.735 "driver_specific": {} 00:06:59.735 } 00:06:59.735 ]' 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.735 [2024-05-14 23:22:22.921509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:59.735 [2024-05-14 23:22:22.921647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.735 [2024-05-14 23:22:22.921702] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028880 00:06:59.735 [2024-05-14 23:22:22.921741] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.735 [2024-05-14 23:22:22.923734] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.735 [2024-05-14 23:22:22.923791] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:59.735 Passthru0 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.735 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.735 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:59.735 { 00:06:59.735 "name": "Malloc0", 00:06:59.735 "aliases": [ 00:06:59.735 "36ec930a-2486-4c2c-85eb-205ab4bd828a" 00:06:59.735 ], 00:06:59.735 "product_name": "Malloc disk", 00:06:59.735 "block_size": 512, 00:06:59.735 "num_blocks": 16384, 00:06:59.735 "uuid": "36ec930a-2486-4c2c-85eb-205ab4bd828a", 00:06:59.735 "assigned_rate_limits": { 00:06:59.735 "rw_ios_per_sec": 0, 00:06:59.735 "rw_mbytes_per_sec": 0, 00:06:59.735 "r_mbytes_per_sec": 0, 00:06:59.736 "w_mbytes_per_sec": 0 00:06:59.736 }, 00:06:59.736 "claimed": true, 00:06:59.736 "claim_type": "exclusive_write", 00:06:59.736 "zoned": false, 00:06:59.736 "supported_io_types": { 00:06:59.736 "read": true, 00:06:59.736 "write": true, 00:06:59.736 "unmap": true, 00:06:59.736 "write_zeroes": true, 00:06:59.736 "flush": true, 00:06:59.736 "reset": true, 00:06:59.736 "compare": false, 00:06:59.736 "compare_and_write": false, 00:06:59.736 "abort": true, 00:06:59.736 "nvme_admin": false, 00:06:59.736 "nvme_io": false 00:06:59.736 }, 00:06:59.736 "memory_domains": [ 00:06:59.736 { 00:06:59.736 "dma_device_id": "system", 00:06:59.736 "dma_device_type": 1 00:06:59.736 }, 00:06:59.736 { 00:06:59.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.736 "dma_device_type": 2 00:06:59.736 } 
00:06:59.736 ], 00:06:59.736 "driver_specific": {} 00:06:59.736 }, 00:06:59.736 { 00:06:59.736 "name": "Passthru0", 00:06:59.736 "aliases": [ 00:06:59.736 "b3455576-0ac5-5126-bbc0-a2784bd0f598" 00:06:59.736 ], 00:06:59.736 "product_name": "passthru", 00:06:59.736 "block_size": 512, 00:06:59.736 "num_blocks": 16384, 00:06:59.736 "uuid": "b3455576-0ac5-5126-bbc0-a2784bd0f598", 00:06:59.736 "assigned_rate_limits": { 00:06:59.736 "rw_ios_per_sec": 0, 00:06:59.736 "rw_mbytes_per_sec": 0, 00:06:59.736 "r_mbytes_per_sec": 0, 00:06:59.736 "w_mbytes_per_sec": 0 00:06:59.736 }, 00:06:59.736 "claimed": false, 00:06:59.736 "zoned": false, 00:06:59.736 "supported_io_types": { 00:06:59.736 "read": true, 00:06:59.736 "write": true, 00:06:59.736 "unmap": true, 00:06:59.736 "write_zeroes": true, 00:06:59.736 "flush": true, 00:06:59.736 "reset": true, 00:06:59.736 "compare": false, 00:06:59.736 "compare_and_write": false, 00:06:59.736 "abort": true, 00:06:59.736 "nvme_admin": false, 00:06:59.736 "nvme_io": false 00:06:59.736 }, 00:06:59.736 "memory_domains": [ 00:06:59.736 { 00:06:59.736 "dma_device_id": "system", 00:06:59.736 "dma_device_type": 1 00:06:59.736 }, 00:06:59.736 { 00:06:59.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.736 "dma_device_type": 2 00:06:59.736 } 00:06:59.736 ], 00:06:59.736 "driver_specific": { 00:06:59.736 "passthru": { 00:06:59.736 "name": "Passthru0", 00:06:59.736 "base_bdev_name": "Malloc0" 00:06:59.736 } 00:06:59.736 } 00:06:59.736 } 00:06:59.736 ]' 00:06:59.736 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:59.736 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:59.736 23:22:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:59.736 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.736 23:22:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.736 23:22:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.736 23:22:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:59.736 23:22:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.736 23:22:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.995 23:22:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.995 23:22:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:59.995 23:22:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.995 23:22:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.995 23:22:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.995 23:22:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:59.995 23:22:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:59.995 23:22:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:59.995 00:06:59.995 real 0m0.351s 00:06:59.995 user 0m0.230s 00:06:59.995 sys 0m0.033s 00:06:59.995 23:22:23 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.995 ************************************ 00:06:59.995 END TEST rpc_integrity 00:06:59.995 ************************************ 00:06:59.995 23:22:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.995 23:22:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:59.995 23:22:23 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.995 23:22:23 rpc -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.995 23:22:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.995 ************************************ 00:06:59.995 START TEST rpc_plugins 00:06:59.995 ************************************ 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:06:59.995 23:22:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.995 23:22:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:59.995 23:22:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.995 23:22:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:59.995 { 00:06:59.995 "name": "Malloc1", 00:06:59.995 "aliases": [ 00:06:59.995 "ba94b563-10bd-49d6-a4e0-0f49ed200fbd" 00:06:59.995 ], 00:06:59.995 "product_name": "Malloc disk", 00:06:59.995 "block_size": 4096, 00:06:59.995 "num_blocks": 256, 00:06:59.995 "uuid": "ba94b563-10bd-49d6-a4e0-0f49ed200fbd", 00:06:59.995 "assigned_rate_limits": { 00:06:59.995 "rw_ios_per_sec": 0, 00:06:59.995 "rw_mbytes_per_sec": 0, 00:06:59.995 "r_mbytes_per_sec": 0, 00:06:59.995 "w_mbytes_per_sec": 0 00:06:59.995 }, 00:06:59.995 "claimed": false, 00:06:59.995 "zoned": false, 00:06:59.995 "supported_io_types": { 00:06:59.995 "read": true, 00:06:59.995 "write": true, 00:06:59.995 "unmap": true, 00:06:59.995 "write_zeroes": true, 00:06:59.995 "flush": true, 00:06:59.995 "reset": true, 00:06:59.995 "compare": false, 00:06:59.995 "compare_and_write": false, 00:06:59.995 "abort": true, 00:06:59.995 "nvme_admin": false, 00:06:59.995 "nvme_io": false 00:06:59.995 }, 00:06:59.995 "memory_domains": [ 00:06:59.995 { 00:06:59.995 "dma_device_id": "system", 00:06:59.995 "dma_device_type": 1 00:06:59.995 }, 00:06:59.995 { 00:06:59.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.995 "dma_device_type": 2 00:06:59.995 } 00:06:59.995 ], 00:06:59.995 "driver_specific": {} 00:06:59.995 } 00:06:59.995 ]' 00:06:59.995 23:22:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:59.995 23:22:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:59.995 23:22:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.995 23:22:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:59.995 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.995 23:22:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:59.995 23:22:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:00.254 ************************************ 
00:07:00.254 END TEST rpc_plugins 00:07:00.254 ************************************ 00:07:00.254 23:22:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:00.254 00:07:00.254 real 0m0.169s 00:07:00.254 user 0m0.119s 00:07:00.254 sys 0m0.016s 00:07:00.254 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.254 23:22:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:00.254 23:22:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:00.254 23:22:23 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:00.254 23:22:23 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.254 23:22:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.254 ************************************ 00:07:00.254 START TEST rpc_trace_cmd_test 00:07:00.254 ************************************ 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:00.254 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45958", 00:07:00.254 "tpoint_group_mask": "0x8", 00:07:00.254 "iscsi_conn": { 00:07:00.254 "mask": "0x2", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "scsi": { 00:07:00.254 "mask": "0x4", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "bdev": { 00:07:00.254 "mask": "0x8", 00:07:00.254 "tpoint_mask": "0xffffffffffffffff" 00:07:00.254 }, 00:07:00.254 "nvmf_rdma": { 00:07:00.254 "mask": "0x10", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "nvmf_tcp": { 00:07:00.254 "mask": "0x20", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "ftl": { 00:07:00.254 "mask": "0x40", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "blobfs": { 00:07:00.254 "mask": "0x80", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "dsa": { 00:07:00.254 "mask": "0x200", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "thread": { 00:07:00.254 "mask": "0x400", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "nvme_pcie": { 00:07:00.254 "mask": "0x800", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "iaa": { 00:07:00.254 "mask": "0x1000", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "nvme_tcp": { 00:07:00.254 "mask": "0x2000", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "bdev_nvme": { 00:07:00.254 "mask": "0x4000", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 }, 00:07:00.254 "sock": { 00:07:00.254 "mask": "0x8000", 00:07:00.254 "tpoint_mask": "0x0" 00:07:00.254 } 00:07:00.254 }' 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:00.254 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:07:00.512 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:00.512 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:00.512 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:00.512 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:00.512 23:22:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:00.512 00:07:00.512 real 0m0.301s 00:07:00.512 user 0m0.271s 00:07:00.512 sys 0m0.024s 00:07:00.512 23:22:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.512 ************************************ 00:07:00.512 END TEST rpc_trace_cmd_test 00:07:00.512 ************************************ 00:07:00.512 23:22:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.512 23:22:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:00.512 23:22:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:00.512 23:22:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:00.512 23:22:23 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:00.512 23:22:23 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.512 23:22:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.512 ************************************ 00:07:00.512 START TEST rpc_daemon_integrity 00:07:00.512 ************************************ 00:07:00.512 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:07:00.512 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:00.512 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.512 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.512 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.512 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:00.512 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:00.512 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:00.512 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:00.512 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.512 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:00.771 { 00:07:00.771 "name": "Malloc2", 00:07:00.771 "aliases": [ 00:07:00.771 "deda9071-8018-4ac7-9139-4ac58a4e0012" 00:07:00.771 ], 00:07:00.771 "product_name": "Malloc disk", 00:07:00.771 "block_size": 512, 00:07:00.771 "num_blocks": 16384, 00:07:00.771 "uuid": "deda9071-8018-4ac7-9139-4ac58a4e0012", 00:07:00.771 "assigned_rate_limits": { 00:07:00.771 "rw_ios_per_sec": 0, 00:07:00.771 
"rw_mbytes_per_sec": 0, 00:07:00.771 "r_mbytes_per_sec": 0, 00:07:00.771 "w_mbytes_per_sec": 0 00:07:00.771 }, 00:07:00.771 "claimed": false, 00:07:00.771 "zoned": false, 00:07:00.771 "supported_io_types": { 00:07:00.771 "read": true, 00:07:00.771 "write": true, 00:07:00.771 "unmap": true, 00:07:00.771 "write_zeroes": true, 00:07:00.771 "flush": true, 00:07:00.771 "reset": true, 00:07:00.771 "compare": false, 00:07:00.771 "compare_and_write": false, 00:07:00.771 "abort": true, 00:07:00.771 "nvme_admin": false, 00:07:00.771 "nvme_io": false 00:07:00.771 }, 00:07:00.771 "memory_domains": [ 00:07:00.771 { 00:07:00.771 "dma_device_id": "system", 00:07:00.771 "dma_device_type": 1 00:07:00.771 }, 00:07:00.771 { 00:07:00.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.771 "dma_device_type": 2 00:07:00.771 } 00:07:00.771 ], 00:07:00.771 "driver_specific": {} 00:07:00.771 } 00:07:00.771 ]' 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 [2024-05-14 23:22:23.893393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:00.771 [2024-05-14 23:22:23.893479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.771 [2024-05-14 23:22:23.893551] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:07:00.771 [2024-05-14 23:22:23.893582] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.771 [2024-05-14 23:22:23.895459] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.771 [2024-05-14 23:22:23.895505] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:00.771 Passthru0 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:00.771 { 00:07:00.771 "name": "Malloc2", 00:07:00.771 "aliases": [ 00:07:00.771 "deda9071-8018-4ac7-9139-4ac58a4e0012" 00:07:00.771 ], 00:07:00.771 "product_name": "Malloc disk", 00:07:00.771 "block_size": 512, 00:07:00.771 "num_blocks": 16384, 00:07:00.771 "uuid": "deda9071-8018-4ac7-9139-4ac58a4e0012", 00:07:00.771 "assigned_rate_limits": { 00:07:00.771 "rw_ios_per_sec": 0, 00:07:00.771 "rw_mbytes_per_sec": 0, 00:07:00.771 "r_mbytes_per_sec": 0, 00:07:00.771 "w_mbytes_per_sec": 0 00:07:00.771 }, 00:07:00.771 "claimed": true, 00:07:00.771 "claim_type": "exclusive_write", 00:07:00.771 "zoned": false, 00:07:00.771 "supported_io_types": { 00:07:00.771 "read": true, 00:07:00.771 "write": true, 00:07:00.771 "unmap": true, 00:07:00.771 "write_zeroes": true, 00:07:00.771 "flush": true, 00:07:00.771 "reset": true, 00:07:00.771 "compare": false, 
00:07:00.771 "compare_and_write": false, 00:07:00.771 "abort": true, 00:07:00.771 "nvme_admin": false, 00:07:00.771 "nvme_io": false 00:07:00.771 }, 00:07:00.771 "memory_domains": [ 00:07:00.771 { 00:07:00.771 "dma_device_id": "system", 00:07:00.771 "dma_device_type": 1 00:07:00.771 }, 00:07:00.771 { 00:07:00.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.771 "dma_device_type": 2 00:07:00.771 } 00:07:00.771 ], 00:07:00.771 "driver_specific": {} 00:07:00.771 }, 00:07:00.771 { 00:07:00.771 "name": "Passthru0", 00:07:00.771 "aliases": [ 00:07:00.771 "35170bc4-79f7-520d-9e5d-42af44ad7671" 00:07:00.771 ], 00:07:00.771 "product_name": "passthru", 00:07:00.771 "block_size": 512, 00:07:00.771 "num_blocks": 16384, 00:07:00.771 "uuid": "35170bc4-79f7-520d-9e5d-42af44ad7671", 00:07:00.771 "assigned_rate_limits": { 00:07:00.771 "rw_ios_per_sec": 0, 00:07:00.771 "rw_mbytes_per_sec": 0, 00:07:00.771 "r_mbytes_per_sec": 0, 00:07:00.771 "w_mbytes_per_sec": 0 00:07:00.771 }, 00:07:00.771 "claimed": false, 00:07:00.771 "zoned": false, 00:07:00.771 "supported_io_types": { 00:07:00.771 "read": true, 00:07:00.771 "write": true, 00:07:00.771 "unmap": true, 00:07:00.771 "write_zeroes": true, 00:07:00.771 "flush": true, 00:07:00.771 "reset": true, 00:07:00.771 "compare": false, 00:07:00.771 "compare_and_write": false, 00:07:00.771 "abort": true, 00:07:00.771 "nvme_admin": false, 00:07:00.771 "nvme_io": false 00:07:00.771 }, 00:07:00.771 "memory_domains": [ 00:07:00.771 { 00:07:00.771 "dma_device_id": "system", 00:07:00.771 "dma_device_type": 1 00:07:00.771 }, 00:07:00.771 { 00:07:00.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.771 "dma_device_type": 2 00:07:00.771 } 00:07:00.771 ], 00:07:00.771 "driver_specific": { 00:07:00.771 "passthru": { 00:07:00.771 "name": "Passthru0", 00:07:00.771 "base_bdev_name": "Malloc2" 00:07:00.771 } 00:07:00.771 } 00:07:00.771 } 00:07:00.771 ]' 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.771 23:22:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 23:22:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.771 23:22:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:00.771 23:22:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.771 23:22:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 23:22:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.771 23:22:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:00.771 23:22:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:01.029 23:22:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:01.029 ************************************ 00:07:01.030 END TEST 
rpc_daemon_integrity 00:07:01.030 ************************************ 00:07:01.030 00:07:01.030 real 0m0.367s 00:07:01.030 user 0m0.242s 00:07:01.030 sys 0m0.033s 00:07:01.030 23:22:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.030 23:22:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:01.030 23:22:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:01.030 23:22:24 rpc -- rpc/rpc.sh@84 -- # killprocess 45958 00:07:01.030 23:22:24 rpc -- common/autotest_common.sh@946 -- # '[' -z 45958 ']' 00:07:01.030 23:22:24 rpc -- common/autotest_common.sh@950 -- # kill -0 45958 00:07:01.030 23:22:24 rpc -- common/autotest_common.sh@951 -- # uname 00:07:01.030 23:22:24 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:01.030 23:22:24 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 45958 00:07:01.030 23:22:24 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:01.030 killing process with pid 45958 00:07:01.030 23:22:24 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:01.030 23:22:24 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 45958' 00:07:01.030 23:22:24 rpc -- common/autotest_common.sh@965 -- # kill 45958 00:07:01.030 23:22:24 rpc -- common/autotest_common.sh@970 -- # wait 45958 00:07:03.561 ************************************ 00:07:03.561 END TEST rpc 00:07:03.561 ************************************ 00:07:03.561 00:07:03.561 real 0m4.899s 00:07:03.561 user 0m5.559s 00:07:03.561 sys 0m0.738s 00:07:03.561 23:22:26 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.561 23:22:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.561 23:22:26 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:03.561 23:22:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.561 23:22:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.561 23:22:26 -- common/autotest_common.sh@10 -- # set +x 00:07:03.561 ************************************ 00:07:03.561 START TEST skip_rpc 00:07:03.561 ************************************ 00:07:03.561 23:22:26 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:03.561 * Looking for test storage... 
00:07:03.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:03.561 23:22:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:03.561 23:22:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:03.561 23:22:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:03.562 23:22:26 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.562 23:22:26 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.562 23:22:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.562 ************************************ 00:07:03.562 START TEST skip_rpc 00:07:03.562 ************************************ 00:07:03.562 23:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:07:03.562 23:22:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=46224 00:07:03.562 23:22:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:03.562 23:22:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:03.562 23:22:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:03.562 [2024-05-14 23:22:26.535067] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:07:03.562 [2024-05-14 23:22:26.535340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46224 ] 00:07:03.562 [2024-05-14 23:22:26.683802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.820 [2024-05-14 23:22:26.894065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 46224 00:07:09.092 23:22:31 
skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 46224 ']' 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 46224 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 46224 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.092 killing process with pid 46224 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46224' 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 46224 00:07:09.092 23:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 46224 00:07:10.484 00:07:10.484 real 0m7.169s 00:07:10.484 user 0m6.602s 00:07:10.484 sys 0m0.378s 00:07:10.484 23:22:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.484 ************************************ 00:07:10.484 END TEST skip_rpc 00:07:10.484 ************************************ 00:07:10.484 23:22:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.484 23:22:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:10.484 23:22:33 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.484 23:22:33 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.484 23:22:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.484 ************************************ 00:07:10.484 START TEST skip_rpc_with_json 00:07:10.484 ************************************ 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=46335 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 46335 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 46335 ']' 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.484 23:22:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:10.484 [2024-05-14 23:22:33.741524] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:07:10.484 [2024-05-14 23:22:33.741713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46335 ] 00:07:10.742 [2024-05-14 23:22:33.898410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.998 [2024-05-14 23:22:34.112129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.934 [2024-05-14 23:22:34.981572] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:11.934 request: 00:07:11.934 { 00:07:11.934 "trtype": "tcp", 00:07:11.934 "method": "nvmf_get_transports", 00:07:11.934 "req_id": 1 00:07:11.934 } 00:07:11.934 Got JSON-RPC error response 00:07:11.934 response: 00:07:11.934 { 00:07:11.934 "code": -19, 00:07:11.934 "message": "No such device" 00:07:11.934 } 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.934 [2024-05-14 23:22:34.989618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.934 23:22:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.934 23:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.934 23:22:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:11.934 { 00:07:11.934 "subsystems": [ 00:07:11.934 { 00:07:11.934 "subsystem": "scheduler", 00:07:11.934 "config": [ 00:07:11.934 { 00:07:11.934 "method": "framework_set_scheduler", 00:07:11.934 "params": { 00:07:11.934 "name": "static" 00:07:11.934 } 00:07:11.934 } 00:07:11.934 ] 00:07:11.934 }, 00:07:11.934 { 00:07:11.934 "subsystem": "vmd", 00:07:11.934 "config": [] 00:07:11.934 }, 00:07:11.934 { 00:07:11.934 "subsystem": "sock", 00:07:11.934 "config": [ 00:07:11.934 { 00:07:11.934 "method": "sock_impl_set_options", 00:07:11.934 "params": { 00:07:11.934 "impl_name": "posix", 00:07:11.934 "recv_buf_size": 2097152, 00:07:11.934 "send_buf_size": 2097152, 00:07:11.934 "enable_recv_pipe": true, 00:07:11.934 "enable_quickack": false, 00:07:11.934 "enable_placement_id": 0, 00:07:11.934 "enable_zerocopy_send_server": true, 00:07:11.934 "enable_zerocopy_send_client": false, 00:07:11.934 "zerocopy_threshold": 0, 00:07:11.934 "tls_version": 0, 
00:07:11.934 "enable_ktls": false 00:07:11.934 } 00:07:11.934 }, 00:07:11.934 { 00:07:11.934 "method": "sock_impl_set_options", 00:07:11.934 "params": { 00:07:11.934 "impl_name": "ssl", 00:07:11.934 "recv_buf_size": 4096, 00:07:11.934 "send_buf_size": 4096, 00:07:11.934 "enable_recv_pipe": true, 00:07:11.934 "enable_quickack": false, 00:07:11.934 "enable_placement_id": 0, 00:07:11.934 "enable_zerocopy_send_server": true, 00:07:11.934 "enable_zerocopy_send_client": false, 00:07:11.934 "zerocopy_threshold": 0, 00:07:11.934 "tls_version": 0, 00:07:11.934 "enable_ktls": false 00:07:11.934 } 00:07:11.934 } 00:07:11.934 ] 00:07:11.934 }, 00:07:11.934 { 00:07:11.934 "subsystem": "iobuf", 00:07:11.934 "config": [ 00:07:11.934 { 00:07:11.934 "method": "iobuf_set_options", 00:07:11.934 "params": { 00:07:11.934 "small_pool_count": 8192, 00:07:11.934 "large_pool_count": 1024, 00:07:11.934 "small_bufsize": 8192, 00:07:11.934 "large_bufsize": 135168 00:07:11.934 } 00:07:11.934 } 00:07:11.934 ] 00:07:11.934 }, 00:07:11.934 { 00:07:11.934 "subsystem": "keyring", 00:07:11.934 "config": [] 00:07:11.934 }, 00:07:11.934 { 00:07:11.934 "subsystem": "accel", 00:07:11.934 "config": [ 00:07:11.934 { 00:07:11.934 "method": "accel_set_options", 00:07:11.934 "params": { 00:07:11.934 "small_cache_size": 128, 00:07:11.934 "large_cache_size": 16, 00:07:11.934 "task_count": 2048, 00:07:11.934 "sequence_count": 2048, 00:07:11.934 "buf_count": 2048 00:07:11.934 } 00:07:11.934 } 00:07:11.934 ] 00:07:11.934 }, 00:07:11.934 { 00:07:11.934 "subsystem": "bdev", 00:07:11.934 "config": [ 00:07:11.934 { 00:07:11.934 "method": "bdev_set_options", 00:07:11.934 "params": { 00:07:11.934 "bdev_io_pool_size": 65535, 00:07:11.934 "bdev_io_cache_size": 256, 00:07:11.935 "bdev_auto_examine": true, 00:07:11.935 "iobuf_small_cache_size": 128, 00:07:11.935 "iobuf_large_cache_size": 16 00:07:11.935 } 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "method": "bdev_raid_set_options", 00:07:11.935 "params": { 00:07:11.935 "process_window_size_kb": 1024 00:07:11.935 } 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "method": "bdev_nvme_set_options", 00:07:11.935 "params": { 00:07:11.935 "action_on_timeout": "none", 00:07:11.935 "timeout_us": 0, 00:07:11.935 "timeout_admin_us": 0, 00:07:11.935 "keep_alive_timeout_ms": 10000, 00:07:11.935 "arbitration_burst": 0, 00:07:11.935 "low_priority_weight": 0, 00:07:11.935 "medium_priority_weight": 0, 00:07:11.935 "high_priority_weight": 0, 00:07:11.935 "nvme_adminq_poll_period_us": 10000, 00:07:11.935 "nvme_ioq_poll_period_us": 0, 00:07:11.935 "io_queue_requests": 0, 00:07:11.935 "delay_cmd_submit": true, 00:07:11.935 "transport_retry_count": 4, 00:07:11.935 "bdev_retry_count": 3, 00:07:11.935 "transport_ack_timeout": 0, 00:07:11.935 "ctrlr_loss_timeout_sec": 0, 00:07:11.935 "reconnect_delay_sec": 0, 00:07:11.935 "fast_io_fail_timeout_sec": 0, 00:07:11.935 "disable_auto_failback": false, 00:07:11.935 "generate_uuids": false, 00:07:11.935 "transport_tos": 0, 00:07:11.935 "nvme_error_stat": false, 00:07:11.935 "rdma_srq_size": 0, 00:07:11.935 "io_path_stat": false, 00:07:11.935 "allow_accel_sequence": false, 00:07:11.935 "rdma_max_cq_size": 0, 00:07:11.935 "rdma_cm_event_timeout_ms": 0, 00:07:11.935 "dhchap_digests": [ 00:07:11.935 "sha256", 00:07:11.935 "sha384", 00:07:11.935 "sha512" 00:07:11.935 ], 00:07:11.935 "dhchap_dhgroups": [ 00:07:11.935 "null", 00:07:11.935 "ffdhe2048", 00:07:11.935 "ffdhe3072", 00:07:11.935 "ffdhe4096", 00:07:11.935 "ffdhe6144", 00:07:11.935 "ffdhe8192" 00:07:11.935 ] 00:07:11.935 } 
00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "method": "bdev_nvme_set_hotplug", 00:07:11.935 "params": { 00:07:11.935 "period_us": 100000, 00:07:11.935 "enable": false 00:07:11.935 } 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "method": "bdev_wait_for_examine" 00:07:11.935 } 00:07:11.935 ] 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "subsystem": "nvmf", 00:07:11.935 "config": [ 00:07:11.935 { 00:07:11.935 "method": "nvmf_set_config", 00:07:11.935 "params": { 00:07:11.935 "discovery_filter": "match_any", 00:07:11.935 "admin_cmd_passthru": { 00:07:11.935 "identify_ctrlr": false 00:07:11.935 } 00:07:11.935 } 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "method": "nvmf_set_max_subsystems", 00:07:11.935 "params": { 00:07:11.935 "max_subsystems": 1024 00:07:11.935 } 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "method": "nvmf_set_crdt", 00:07:11.935 "params": { 00:07:11.935 "crdt1": 0, 00:07:11.935 "crdt2": 0, 00:07:11.935 "crdt3": 0 00:07:11.935 } 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "method": "nvmf_create_transport", 00:07:11.935 "params": { 00:07:11.935 "trtype": "TCP", 00:07:11.935 "max_queue_depth": 128, 00:07:11.935 "max_io_qpairs_per_ctrlr": 127, 00:07:11.935 "in_capsule_data_size": 4096, 00:07:11.935 "max_io_size": 131072, 00:07:11.935 "io_unit_size": 131072, 00:07:11.935 "max_aq_depth": 128, 00:07:11.935 "num_shared_buffers": 511, 00:07:11.935 "buf_cache_size": 4294967295, 00:07:11.935 "dif_insert_or_strip": false, 00:07:11.935 "zcopy": false, 00:07:11.935 "c2h_success": true, 00:07:11.935 "sock_priority": 0, 00:07:11.935 "abort_timeout_sec": 1, 00:07:11.935 "ack_timeout": 0, 00:07:11.935 "data_wr_pool_size": 0 00:07:11.935 } 00:07:11.935 } 00:07:11.935 ] 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "subsystem": "nbd", 00:07:11.935 "config": [] 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "subsystem": "vhost_blk", 00:07:11.935 "config": [] 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "subsystem": "scsi", 00:07:11.935 "config": null 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "subsystem": "iscsi", 00:07:11.935 "config": [ 00:07:11.935 { 00:07:11.935 "method": "iscsi_set_options", 00:07:11.935 "params": { 00:07:11.935 "node_base": "iqn.2016-06.io.spdk", 00:07:11.935 "max_sessions": 128, 00:07:11.935 "max_connections_per_session": 2, 00:07:11.935 "max_queue_depth": 64, 00:07:11.935 "default_time2wait": 2, 00:07:11.935 "default_time2retain": 20, 00:07:11.935 "first_burst_length": 8192, 00:07:11.935 "immediate_data": true, 00:07:11.935 "allow_duplicated_isid": false, 00:07:11.935 "error_recovery_level": 0, 00:07:11.935 "nop_timeout": 60, 00:07:11.935 "nop_in_interval": 30, 00:07:11.935 "disable_chap": false, 00:07:11.935 "require_chap": false, 00:07:11.935 "mutual_chap": false, 00:07:11.935 "chap_group": 0, 00:07:11.935 "max_large_datain_per_connection": 64, 00:07:11.935 "max_r2t_per_connection": 4, 00:07:11.935 "pdu_pool_size": 36864, 00:07:11.935 "immediate_data_pool_size": 16384, 00:07:11.935 "data_out_pool_size": 2048 00:07:11.935 } 00:07:11.935 } 00:07:11.935 ] 00:07:11.935 }, 00:07:11.935 { 00:07:11.935 "subsystem": "vhost_scsi", 00:07:11.935 "config": [] 00:07:11.935 } 00:07:11.935 ] 00:07:11.935 } 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 46335 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 46335 ']' 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # 
kill -0 46335 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 46335 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:11.935 killing process with pid 46335 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46335' 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 46335 00:07:11.935 23:22:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 46335 00:07:14.465 23:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=46401 00:07:14.465 23:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:14.466 23:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:19.781 23:22:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 46401 00:07:19.781 23:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 46401 ']' 00:07:19.781 23:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 46401 00:07:19.781 23:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:07:19.781 23:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.781 23:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 46401 00:07:19.781 23:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:19.781 killing process with pid 46401 00:07:19.781 23:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:19.781 23:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46401' 00:07:19.781 23:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 46401 00:07:19.781 23:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 46401 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:21.685 00:07:21.685 real 0m10.985s 00:07:21.685 user 0m10.323s 00:07:21.685 sys 0m0.834s 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.685 ************************************ 00:07:21.685 END TEST skip_rpc_with_json 00:07:21.685 ************************************ 00:07:21.685 23:22:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:21.685 23:22:44 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:21.685 23:22:44 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.685 23:22:44 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.685 ************************************ 00:07:21.685 START TEST skip_rpc_with_delay 00:07:21.685 ************************************ 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:21.685 [2024-05-14 23:22:44.783438] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
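The app.c error above is the expected result of the skip_rpc_with_delay case: spdk_tgt is launched with both --no-rpc-server and --wait-for-rpc and must refuse to start. A minimal stand-alone sketch of that check, using only the binary path and flags visible in the trace (the framework's NOT helper is replaced here by a plain if, which is an assumption about intent rather than the test's actual code):

    # expect spdk_tgt to exit non-zero when RPC is disabled but --wait-for-rpc is requested
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt started even though --wait-for-rpc should be rejected" >&2
        exit 1
    fi
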
00:07:21.685 [2024-05-14 23:22:44.783687] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.685 00:07:21.685 real 0m0.174s 00:07:21.685 user 0m0.042s 00:07:21.685 sys 0m0.036s 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.685 ************************************ 00:07:21.685 END TEST skip_rpc_with_delay 00:07:21.685 ************************************ 00:07:21.685 23:22:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:21.685 23:22:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:21.685 23:22:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:21.685 23:22:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:21.685 23:22:44 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:21.685 23:22:44 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.685 23:22:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.685 ************************************ 00:07:21.685 START TEST exit_on_failed_rpc_init 00:07:21.685 ************************************ 00:07:21.685 23:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:07:21.685 23:22:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=46542 00:07:21.685 23:22:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 46542 00:07:21.685 23:22:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.685 23:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 46542 ']' 00:07:21.685 23:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.685 23:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:21.685 23:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.685 23:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:21.685 23:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:21.943 [2024-05-14 23:22:45.004743] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:07:21.943 [2024-05-14 23:22:45.004958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46542 ] 00:07:21.943 [2024-05-14 23:22:45.154762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.201 [2024-05-14 23:22:45.356622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.135 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:23.136 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:23.136 23:22:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:23.136 [2024-05-14 23:22:46.296078] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:07:23.136 [2024-05-14 23:22:46.296303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46567 ] 00:07:23.393 [2024-05-14 23:22:46.467339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.650 [2024-05-14 23:22:46.691190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.650 [2024-05-14 23:22:46.691325] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:23.650 [2024-05-14 23:22:46.691358] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:23.650 [2024-05-14 23:22:46.691385] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 46542 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 46542 ']' 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 46542 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 46542 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:23.908 killing process with pid 46542 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:23.908 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46542' 00:07:23.909 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 46542 00:07:23.909 23:22:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 46542 00:07:25.809 00:07:25.809 real 0m4.226s 00:07:25.809 user 0m4.639s 00:07:25.809 sys 0m0.566s 00:07:25.809 ************************************ 00:07:25.809 END TEST exit_on_failed_rpc_init 00:07:25.809 ************************************ 00:07:25.809 23:22:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.809 23:22:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:26.067 23:22:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:26.067 00:07:26.067 real 0m22.833s 00:07:26.067 user 0m21.705s 00:07:26.067 sys 0m1.974s 00:07:26.067 23:22:49 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.067 23:22:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.067 ************************************ 00:07:26.067 END TEST skip_rpc 00:07:26.067 ************************************ 00:07:26.067 23:22:49 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:26.067 23:22:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:26.067 23:22:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.067 23:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:26.067 
************************************ 00:07:26.067 START TEST rpc_client 00:07:26.067 ************************************ 00:07:26.067 23:22:49 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:26.067 * Looking for test storage... 00:07:26.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:26.067 23:22:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:26.325 OK 00:07:26.325 23:22:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:26.325 00:07:26.325 real 0m0.229s 00:07:26.325 user 0m0.074s 00:07:26.325 sys 0m0.068s 00:07:26.325 23:22:49 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.325 23:22:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:26.325 ************************************ 00:07:26.325 END TEST rpc_client 00:07:26.325 ************************************ 00:07:26.325 23:22:49 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:26.325 23:22:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:26.325 23:22:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.325 23:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:26.325 ************************************ 00:07:26.325 START TEST json_config 00:07:26.325 ************************************ 00:07:26.325 23:22:49 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93d9aa8c-66ee-41ab-956d-26d2c3a6ae68 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=93d9aa8c-66ee-41ab-956d-26d2c3a6ae68 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.325 23:22:49 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.325 23:22:49 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.325 23:22:49 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.325 23:22:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:26.325 23:22:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:26.325 23:22:49 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:26.325 23:22:49 json_config -- paths/export.sh@5 -- # export PATH 00:07:26.325 23:22:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@47 -- # : 0 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.325 23:22:49 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@31 -- # app_pid=([target]="" [initiator]="") 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@32 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:07:26.325 23:22:49 
json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@33 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@34 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:26.325 INFO: JSON configuration test init 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:26.325 23:22:49 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:07:26.326 23:22:49 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:07:26.326 23:22:49 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:07:26.326 23:22:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:26.326 23:22:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.326 23:22:49 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:07:26.326 23:22:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:26.326 23:22:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.326 23:22:49 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:07:26.326 23:22:49 json_config -- json_config/common.sh@9 -- # local app=target 00:07:26.326 23:22:49 json_config -- json_config/common.sh@10 -- # shift 00:07:26.326 23:22:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:26.326 23:22:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:26.326 23:22:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:26.326 23:22:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.326 23:22:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.326 Waiting for target to run... 00:07:26.326 23:22:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46751 00:07:26.326 23:22:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:26.326 23:22:49 json_config -- json_config/common.sh@25 -- # waitforlisten 46751 /var/tmp/spdk_tgt.sock 00:07:26.326 23:22:49 json_config -- common/autotest_common.sh@827 -- # '[' -z 46751 ']' 00:07:26.326 23:22:49 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:26.326 23:22:49 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:26.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:26.326 23:22:49 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:07:26.326 23:22:49 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:26.326 23:22:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.326 23:22:49 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:26.586 [2024-05-14 23:22:49.679140] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:07:26.586 [2024-05-14 23:22:49.679373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46751 ] 00:07:26.849 [2024-05-14 23:22:50.100515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.107 [2024-05-14 23:22:50.289136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.366 23:22:50 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:27.366 23:22:50 json_config -- common/autotest_common.sh@860 -- # return 0 00:07:27.366 23:22:50 json_config -- json_config/common.sh@26 -- # echo '' 00:07:27.366 00:07:27.366 23:22:50 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:07:27.366 23:22:50 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:07:27.366 23:22:50 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:27.366 23:22:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:27.366 23:22:50 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:07:27.366 23:22:50 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:07:27.366 23:22:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.366 23:22:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:27.366 23:22:50 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:27.366 23:22:50 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:07:27.366 23:22:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:28.299 23:22:51 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:07:28.299 23:22:51 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:28.299 23:22:51 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:28.299 23:22:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.299 23:22:51 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:28.299 23:22:51 json_config -- json_config/json_config.sh@46 -- # enabled_types=("bdev_register" "bdev_unregister") 00:07:28.299 23:22:51 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:28.299 23:22:51 json_config -- json_config/json_config.sh@48 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:07:28.299 23:22:51 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:28.299 23:22:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:28.299 23:22:51 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@48 -- # local get_types 
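The trace above is json_config_test_start_app bringing up the target for the json_config suite: spdk_tgt is launched with --wait-for-rpc so it idles until a configuration arrives, the harness waits for the RPC socket, and gen_nvme.sh generates an NVMe subsystem config that is piped into load_config. A rough manual equivalent, using only the paths and commands visible in the trace (this assumes a host prepared like the CI VM, with hugepages set up and root privileges):

    # start the target pinned to core 0 with a 1024 MB memory limit; --wait-for-rpc keeps subsystems uninitialized
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # (the harness waits here until /var/tmp/spdk_tgt.sock answers)

    # build a JSON config for the locally attached NVMe devices and feed it to the target
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems \
        | /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config

    # the test then confirms that bdev_register/bdev_unregister notifications are available
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types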
00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:07:28.575 23:22:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.575 23:22:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@55 -- # return 0 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:07:28.575 23:22:51 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:28.575 23:22:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:07:28.575 23:22:51 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:28.575 23:22:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:28.834 23:22:51 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:07:28.834 23:22:51 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:28.834 23:22:51 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:28.834 23:22:51 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:07:28.834 23:22:51 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:07:28.834 23:22:51 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:28.834 23:22:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:29.092 Nvme0n1p0 Nvme0n1p1 00:07:29.092 23:22:52 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:29.092 23:22:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:29.350 [2024-05-14 23:22:52.417670] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:29.350 [2024-05-14 23:22:52.417812] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:29.350 00:07:29.350 23:22:52 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:29.350 23:22:52 
json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:29.608 Malloc3 00:07:29.608 23:22:52 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:29.608 23:22:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:29.608 [2024-05-14 23:22:52.874456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:29.608 [2024-05-14 23:22:52.874600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.608 [2024-05-14 23:22:52.874654] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000036080 00:07:29.608 [2024-05-14 23:22:52.874678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.608 [2024-05-14 23:22:52.876437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.608 [2024-05-14 23:22:52.876487] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:29.608 PTBdevFromMalloc3 00:07:29.608 23:22:52 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:29.608 23:22:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:29.866 Null0 00:07:29.866 23:22:53 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:29.866 23:22:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:30.130 Malloc0 00:07:30.130 23:22:53 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:30.130 23:22:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:30.405 Malloc1 00:07:30.405 23:22:53 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:30.405 23:22:53 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:07:30.661 102400+0 records in 00:07:30.661 102400+0 records out 00:07:30.661 104857600 bytes (105 MB) copied, 0.34253 s, 306 MB/s 00:07:30.661 23:22:53 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:30.661 23:22:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:30.918 aio_disk 00:07:30.918 23:22:54 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:30.918 23:22:54 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:30.918 23:22:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:31.177 fb931eda-990a-425d-8d7c-a10a525b64b6 00:07:31.177 23:22:54 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:31.177 23:22:54 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:31.177 23:22:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:31.437 23:22:54 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:31.438 23:22:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:07:31.695 23:22:54 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:31.695 23:22:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:31.953 23:22:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:e413c924-0c75-4773-b71f-5654cfcfaaac bdev_register:6872fc29-8f5e-4700-8be9-447a98b0458d bdev_register:d01d0152-0932-43b4-be9b-8151250847e6 bdev_register:c80d01f4-3a3e-4141-920c-07d2b7766459 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:e413c924-0c75-4773-b71f-5654cfcfaaac bdev_register:6872fc29-8f5e-4700-8be9-447a98b0458d bdev_register:d01d0152-0932-43b4-be9b-8151250847e6 bdev_register:c80d01f4-3a3e-4141-920c-07d2b7766459 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@71 -- # sort 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:07:31.953 
23:22:55 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@72 -- # sort 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:07:31.953 23:22:55 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:31.953 23:22:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 
00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.212 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:e413c924-0c75-4773-b71f-5654cfcfaaac 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:6872fc29-8f5e-4700-8be9-447a98b0458d 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:d01d0152-0932-43b4-be9b-8151250847e6 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:c80d01f4-3a3e-4141-920c-07d2b7766459 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:6872fc29-8f5e-4700-8be9-447a98b0458d bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:c80d01f4-3a3e-4141-920c-07d2b7766459 bdev_register:d01d0152-0932-43b4-be9b-8151250847e6 bdev_register:e413c924-0c75-4773-b71f-5654cfcfaaac != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\8\7\2\f\c\2\9\-\8\f\5\e\-\4\7\0\0\-\8\b\e\9\-\4\4\7\a\9\8\b\0\4\5\8\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\c\8\0\d\0\1\f\4\-\3\a\3\e\-\4\1\4\1\-\9\2\0\c\-\0\7\d\2\b\7\7\6\6\4\5\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\0\1\d\0\1\5\2\-\0\9\3\2\-\4\3\b\4\-\b\e\9\b\-\8\1\5\1\2\5\0\8\4\7\e\6\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\4\1\3\c\9\2\4\-\0\c\7\5\-\4\7\7\3\-\b\7\1\f\-\5\6\5\4\c\f\c\f\a\a\a\c ]] 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@86 -- # cat 00:07:32.213 23:22:55 
json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:6872fc29-8f5e-4700-8be9-447a98b0458d bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:c80d01f4-3a3e-4141-920c-07d2b7766459 bdev_register:d01d0152-0932-43b4-be9b-8151250847e6 bdev_register:e413c924-0c75-4773-b71f-5654cfcfaaac 00:07:32.213 Expected events matched: 00:07:32.213 bdev_register:6872fc29-8f5e-4700-8be9-447a98b0458d 00:07:32.213 bdev_register:Malloc0 00:07:32.213 bdev_register:Malloc0p0 00:07:32.213 bdev_register:Malloc0p1 00:07:32.213 bdev_register:Malloc0p2 00:07:32.213 bdev_register:Malloc1 00:07:32.213 bdev_register:Malloc3 00:07:32.213 bdev_register:Null0 00:07:32.213 bdev_register:Nvme0n1 00:07:32.213 bdev_register:Nvme0n1p0 00:07:32.213 bdev_register:Nvme0n1p1 00:07:32.213 bdev_register:PTBdevFromMalloc3 00:07:32.213 bdev_register:aio_disk 00:07:32.213 bdev_register:c80d01f4-3a3e-4141-920c-07d2b7766459 00:07:32.213 bdev_register:d01d0152-0932-43b4-be9b-8151250847e6 00:07:32.213 bdev_register:e413c924-0c75-4773-b71f-5654cfcfaaac 00:07:32.213 23:22:55 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:07:32.213 23:22:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.213 23:22:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.472 23:22:55 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:32.472 23:22:55 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:32.472 23:22:55 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:07:32.472 23:22:55 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:07:32.472 23:22:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.472 23:22:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.472 23:22:55 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:07:32.472 23:22:55 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:32.472 23:22:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:32.472 MallocBdevForConfigChangeCheck 00:07:32.472 23:22:55 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:07:32.472 23:22:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.472 23:22:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.472 23:22:55 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:07:32.472 23:22:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:33.038 INFO: shutting down applications... 00:07:33.038 23:22:56 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
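The create_bdev_subsystem_config step traced above registers a mix of bdevs (NVMe splits, malloc, passthru, null, an AIO file, plus an lvol store with a volume, snapshot and clone) and then checks that every registration produced a matching bdev_register notification; the "Expected events matched" list is the outcome of that comparison. All of the RPCs appear verbatim in the trace; a condensed replay of the interesting ones, with $RPC introduced here purely as shorthand, looks roughly like this:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $RPC bdev_split_create Nvme0n1 2                       # yields Nvme0n1p0 / Nvme0n1p1
    $RPC bdev_malloc_create 8 4096 --name Malloc3          # 8 MB malloc bdev, 4096-byte blocks
    $RPC bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
    $RPC bdev_null_create Null0 32 512
    dd if=/dev/zero of=/sample_aio bs=1024 count=102400    # ~100 MiB backing file for the AIO bdev
    $RPC bdev_aio_create /sample_aio aio_disk 1024
    $RPC bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
    $RPC bdev_lvol_create -l lvs_test lvol0 32
    $RPC bdev_lvol_snapshot lvs_test/lvol0 snapshot0
    $RPC bdev_lvol_clone lvs_test/snapshot0 clone0

    # read back everything recorded since event id 0, in the same type:ctx:id form the test sorts and compares
    $RPC notify_get_notifications -i 0 | jq -r '.[] | "\(.type):\(.ctx):\(.id)"' | sort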
00:07:33.038 23:22:56 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:07:33.038 23:22:56 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:07:33.038 23:22:56 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:07:33.038 23:22:56 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:33.038 [2024-05-14 23:22:56.247983] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:33.297 Calling clear_vhost_scsi_subsystem 00:07:33.297 Calling clear_iscsi_subsystem 00:07:33.297 Calling clear_vhost_blk_subsystem 00:07:33.297 Calling clear_nbd_subsystem 00:07:33.297 Calling clear_nvmf_subsystem 00:07:33.297 Calling clear_bdev_subsystem 00:07:33.297 23:22:56 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:33.297 23:22:56 json_config -- json_config/json_config.sh@343 -- # count=100 00:07:33.297 23:22:56 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:07:33.297 23:22:56 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:33.297 23:22:56 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:33.297 23:22:56 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:33.555 23:22:56 json_config -- json_config/json_config.sh@345 -- # break 00:07:33.555 23:22:56 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:07:33.555 23:22:56 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:07:33.555 23:22:56 json_config -- json_config/common.sh@31 -- # local app=target 00:07:33.555 23:22:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:33.555 23:22:56 json_config -- json_config/common.sh@35 -- # [[ -n 46751 ]] 00:07:33.555 23:22:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 46751 00:07:33.555 23:22:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:33.555 23:22:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:33.555 23:22:56 json_config -- json_config/common.sh@41 -- # kill -0 46751 00:07:33.555 23:22:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:34.161 23:22:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:34.161 23:22:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:34.161 23:22:57 json_config -- json_config/common.sh@41 -- # kill -0 46751 00:07:34.161 23:22:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:34.729 23:22:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:34.729 23:22:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:34.729 23:22:57 json_config -- json_config/common.sh@41 -- # kill -0 46751 00:07:34.729 23:22:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:34.729 23:22:57 json_config -- json_config/common.sh@43 -- # break 00:07:34.729 23:22:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:34.729 23:22:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:34.729 SPDK target shutdown done 00:07:34.729 INFO: relaunching 
applications... 00:07:34.729 23:22:57 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:07:34.729 23:22:57 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:34.729 23:22:57 json_config -- json_config/common.sh@9 -- # local app=target 00:07:34.729 23:22:57 json_config -- json_config/common.sh@10 -- # shift 00:07:34.729 23:22:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:34.729 23:22:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:34.729 23:22:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:34.729 23:22:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:34.729 23:22:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:34.729 23:22:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=47015 00:07:34.729 23:22:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:34.729 Waiting for target to run... 00:07:34.729 23:22:57 json_config -- json_config/common.sh@25 -- # waitforlisten 47015 /var/tmp/spdk_tgt.sock 00:07:34.729 23:22:57 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:34.729 23:22:57 json_config -- common/autotest_common.sh@827 -- # '[' -z 47015 ']' 00:07:34.729 23:22:57 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:34.729 23:22:57 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:34.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:34.729 23:22:57 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:34.729 23:22:57 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:34.729 23:22:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:34.729 [2024-05-14 23:22:57.980446] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
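At this point the first target (pid 46751) has been stopped with SIGINT and a second instance is being started with --json pointing at the JSON that save_config produced; the round trip verifies that a configuration written by save_config can be loaded back verbatim. Pieced together from the trace (the redirect of save_config output into spdk_tgt_config.json is implied by the configs_path mapping shown earlier rather than printed literally):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # snapshot the running configuration, then stop the target
    $RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    kill -SIGINT "$target_pid"        # $target_pid: the pid the harness tracked (46751 above)

    # relaunch straight from the saved JSON; no --wait-for-rpc this time
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &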
00:07:34.729 [2024-05-14 23:22:57.980647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47015 ] 00:07:35.295 [2024-05-14 23:22:58.425171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.553 [2024-05-14 23:22:58.617784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.121 [2024-05-14 23:22:59.278724] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:36.121 [2024-05-14 23:22:59.278844] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:36.121 [2024-05-14 23:22:59.286689] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:36.121 [2024-05-14 23:22:59.286745] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:36.121 [2024-05-14 23:22:59.294727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:36.121 [2024-05-14 23:22:59.294794] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:36.121 [2024-05-14 23:22:59.294823] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:36.121 [2024-05-14 23:22:59.382864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:36.121 [2024-05-14 23:22:59.383020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.121 [2024-05-14 23:22:59.383053] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038780 00:07:36.121 [2024-05-14 23:22:59.383079] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.121 [2024-05-14 23:22:59.383488] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.121 [2024-05-14 23:22:59.383545] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:36.380 23:22:59 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:36.380 23:22:59 json_config -- common/autotest_common.sh@860 -- # return 0 00:07:36.380 23:22:59 json_config -- json_config/common.sh@26 -- # echo '' 00:07:36.380 00:07:36.380 INFO: Checking if target configuration is the same... 00:07:36.380 23:22:59 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:07:36.380 23:22:59 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:36.380 23:22:59 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:36.380 23:22:59 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:07:36.380 23:22:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:36.380 + '[' 2 -ne 2 ']' 00:07:36.380 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:36.380 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:36.380 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:36.380 +++ basename /dev/fd/62 00:07:36.380 ++ mktemp /tmp/62.XXX 00:07:36.380 + tmp_file_1=/tmp/62.9To 00:07:36.380 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:36.380 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:36.380 + tmp_file_2=/tmp/spdk_tgt_config.json.7Sp 00:07:36.380 + ret=0 00:07:36.380 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:36.638 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:36.638 + diff -u /tmp/62.9To /tmp/spdk_tgt_config.json.7Sp 00:07:36.638 INFO: JSON config files are the same 00:07:36.638 + echo 'INFO: JSON config files are the same' 00:07:36.638 + rm /tmp/62.9To /tmp/spdk_tgt_config.json.7Sp 00:07:36.638 + exit 0 00:07:36.638 INFO: changing configuration and checking if this can be detected... 00:07:36.638 23:22:59 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:07:36.638 23:22:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:36.638 23:22:59 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:36.638 23:22:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:36.896 23:23:00 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:36.896 23:23:00 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:07:36.896 23:23:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:36.896 + '[' 2 -ne 2 ']' 00:07:36.896 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:36.896 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:36.896 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:36.896 +++ basename /dev/fd/62 00:07:36.896 ++ mktemp /tmp/62.XXX 00:07:36.896 + tmp_file_1=/tmp/62.r6K 00:07:36.896 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:36.896 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:36.896 + tmp_file_2=/tmp/spdk_tgt_config.json.Ko9 00:07:36.896 + ret=0 00:07:36.896 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:37.462 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:37.462 + diff -u /tmp/62.r6K /tmp/spdk_tgt_config.json.Ko9 00:07:37.462 + ret=1 00:07:37.462 + echo '=== Start of file: /tmp/62.r6K ===' 00:07:37.462 + cat /tmp/62.r6K 00:07:37.462 + echo '=== End of file: /tmp/62.r6K ===' 00:07:37.462 + echo '' 00:07:37.462 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Ko9 ===' 00:07:37.462 + cat /tmp/spdk_tgt_config.json.Ko9 00:07:37.462 + echo '=== End of file: /tmp/spdk_tgt_config.json.Ko9 ===' 00:07:37.462 + echo '' 00:07:37.462 + rm /tmp/62.r6K /tmp/spdk_tgt_config.json.Ko9 00:07:37.462 + exit 1 00:07:37.462 INFO: configuration change detected. 00:07:37.462 23:23:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
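The two json_diff.sh runs above implement both halves of the check: first the live configuration of the relaunched target (handed over as /dev/fd/62, which reads like a save_config stream via process substitution) has to match the spdk_tgt_config.json it was booted from, then the marker bdev MallocBdevForConfigChangeCheck is deleted and the same comparison has to fail. Each side goes through config_filter.py -method sort before diff -u, so ordering differences do not count as changes. A condensed re-statement, assuming the filter reads stdin as its bare invocation in the trace suggests, with temporary file names chosen here for readability (the harness uses mktemp):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    SAVED=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

    # 1) the live config must still equal the file it was loaded from
    $RPC save_config | $FILTER -method sort > /tmp/live.json
    $FILTER -method sort < "$SAVED"         > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json            # expected: no output, exit 0

    # 2) remove the marker bdev; the very same diff must now report a difference
    $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
    $RPC save_config | $FILTER -method sort > /tmp/live.json
    diff -u /tmp/saved.json /tmp/live.json            # expected: a diff, exit 1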
00:07:37.462 23:23:00 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:07:37.462 23:23:00 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:07:37.462 23:23:00 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:37.462 23:23:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.462 23:23:00 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:07:37.462 23:23:00 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:07:37.462 23:23:00 json_config -- json_config/json_config.sh@317 -- # [[ -n 47015 ]] 00:07:37.462 23:23:00 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:07:37.462 23:23:00 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:07:37.462 23:23:00 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:37.462 23:23:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.463 23:23:00 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:07:37.463 23:23:00 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:07:37.463 23:23:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:07:37.721 23:23:00 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:07:37.721 23:23:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:07:37.979 23:23:01 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:07:37.979 23:23:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:07:37.979 23:23:01 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:07:37.979 23:23:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:07:38.237 23:23:01 json_config -- json_config/json_config.sh@193 -- # uname -s 00:07:38.237 23:23:01 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:07:38.237 23:23:01 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:07:38.237 23:23:01 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:07:38.237 23:23:01 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.237 23:23:01 json_config -- json_config/json_config.sh@323 -- # killprocess 47015 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@946 -- # '[' -z 47015 ']' 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@950 -- # kill -0 47015 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@951 -- # uname 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 47015 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:38.237 killing process with pid 
47015 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47015' 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@965 -- # kill 47015 00:07:38.237 23:23:01 json_config -- common/autotest_common.sh@970 -- # wait 47015 00:07:39.172 23:23:02 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:39.172 23:23:02 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:07:39.172 23:23:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.172 23:23:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:39.172 INFO: Success 00:07:39.172 23:23:02 json_config -- json_config/json_config.sh@328 -- # return 0 00:07:39.172 23:23:02 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:07:39.172 00:07:39.172 real 0m12.965s 00:07:39.172 user 0m18.256s 00:07:39.172 sys 0m2.241s 00:07:39.172 23:23:02 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.172 23:23:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:39.172 ************************************ 00:07:39.172 END TEST json_config 00:07:39.172 ************************************ 00:07:39.172 23:23:02 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:39.172 23:23:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:39.172 23:23:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.172 23:23:02 -- common/autotest_common.sh@10 -- # set +x 00:07:39.430 ************************************ 00:07:39.430 START TEST json_config_extra_key 00:07:39.430 ************************************ 00:07:39.430 23:23:02 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:39.430 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c63696e5-a242-4a69-86f6-b2a479b2a2a3 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c63696e5-a242-4a69-86f6-b2a479b2a2a3 00:07:39.430 23:23:02 
json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.430 23:23:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.430 23:23:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.430 23:23:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.430 23:23:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:39.430 23:23:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:39.430 23:23:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:39.430 23:23:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:39.430 23:23:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.430 23:23:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.430 INFO: launching applications... 
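Before launching anything, json_config_extra_key sources nvmf/common.sh, and the trace shows nvme gen-hostnqn seeding NVME_HOSTNQN, with NVME_HOSTID carrying the bare UUID from the same NQN. The exact extraction the script uses is not visible here; one way to reproduce the pairing seen in the trace is:

    # generate a host NQN and keep its UUID suffix as the host ID (the values in the trace match this pattern)
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:c63696e5-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # strip everything up to the last ':' to leave the UUID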
00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=([target]="") 00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=([target]='-m 0x1 -s 1024') 00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:39.431 23:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:39.431 23:23:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:39.431 23:23:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:39.431 23:23:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:39.431 23:23:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:39.431 23:23:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:39.431 23:23:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:39.431 23:23:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:39.431 Waiting for target to run... 00:07:39.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:39.431 23:23:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=47209 00:07:39.431 23:23:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:39.431 23:23:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 47209 /var/tmp/spdk_tgt.sock 00:07:39.431 23:23:02 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 47209 ']' 00:07:39.431 23:23:02 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:39.431 23:23:02 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:39.431 23:23:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:39.431 23:23:02 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
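json_config_extra_key boots the target directly from a static file, test/json_config/extra_key.json; going by the test name, the point is that a config carrying keys the target does not recognise must still load, and the app must come up and shut down cleanly. The launch line is taken from the trace; the rpc_get_methods probe is only a convenient liveness check added here, not something the script is shown calling:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    # (wait for /var/tmp/spdk_tgt.sock to appear, as waitforlisten does in the harness)

    # once the socket answers an RPC, the target accepted the config, extra keys included
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null && echo OK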
00:07:39.431 23:23:02 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:39.431 23:23:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:39.431 [2024-05-14 23:23:02.696755] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:07:39.431 [2024-05-14 23:23:02.696932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47209 ] 00:07:39.997 [2024-05-14 23:23:03.139187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.255 [2024-05-14 23:23:03.316223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.821 00:07:40.821 INFO: shutting down applications... 00:07:40.821 23:23:03 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:40.821 23:23:03 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:07:40.821 23:23:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:40.821 23:23:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:40.821 23:23:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:40.821 23:23:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:40.821 23:23:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:40.821 23:23:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 47209 ]] 00:07:40.821 23:23:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 47209 00:07:40.821 23:23:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:40.821 23:23:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:40.821 23:23:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47209 00:07:40.821 23:23:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:41.388 23:23:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:41.388 23:23:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:41.388 23:23:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47209 00:07:41.388 23:23:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:41.958 23:23:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:41.958 23:23:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:41.958 23:23:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47209 00:07:41.958 23:23:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:42.224 23:23:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:42.224 23:23:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:42.224 23:23:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47209 00:07:42.224 23:23:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:42.790 23:23:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:42.790 23:23:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:42.790 23:23:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47209 00:07:42.790 23:23:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 
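The repeated kill -0 / sleep 0.5 lines above are json_config_test_shutdown_app polling for the target to exit after SIGINT; json_config/common.sh allows up to 30 half-second intervals before giving up. Restated as a standalone loop, with app_pid standing in for the pid the harness stored (47209 here):

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2> /dev/null || break   # kill -0 only probes; failure means the process is gone
        sleep 0.5
    done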
00:07:43.357 23:23:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:43.357 23:23:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:43.357 23:23:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47209 00:07:43.357 SPDK target shutdown done 00:07:43.357 Success 00:07:43.357 23:23:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:43.357 23:23:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:43.357 23:23:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:43.357 23:23:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:43.357 23:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:43.357 ************************************ 00:07:43.357 END TEST json_config_extra_key 00:07:43.357 ************************************ 00:07:43.357 00:07:43.357 real 0m4.048s 00:07:43.357 user 0m3.555s 00:07:43.357 sys 0m0.552s 00:07:43.357 23:23:06 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.357 23:23:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:43.357 23:23:06 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:43.357 23:23:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:43.357 23:23:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.357 23:23:06 -- common/autotest_common.sh@10 -- # set +x 00:07:43.357 ************************************ 00:07:43.357 START TEST alias_rpc 00:07:43.357 ************************************ 00:07:43.357 23:23:06 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:43.357 * Looking for test storage... 00:07:43.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:43.357 23:23:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:43.357 23:23:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=47332 00:07:43.357 23:23:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 47332 00:07:43.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.357 23:23:06 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 47332 ']' 00:07:43.357 23:23:06 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.357 23:23:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:43.357 23:23:06 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:43.357 23:23:06 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.357 23:23:06 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:43.357 23:23:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.614 [2024-05-14 23:23:06.783304] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
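[Editor's note] The json_config_extra_key shutdown traced above sends SIGINT to the target and then polls it with `kill -0` every 0.5 s, giving up after 30 attempts, before printing "SPDK target shutdown done". A minimal bash sketch of that polling pattern; the function name and error message are illustrative, not the actual json_config/common.sh code:

```bash
#!/usr/bin/env bash
# Illustrative sketch of the shutdown-wait pattern traced above (hypothetical helper).
wait_for_shutdown() {
    local pid=$1
    kill -SIGINT "$pid" 2>/dev/null || return 0     # already gone
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then       # kill -0 only checks existence
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5                                   # same 0.5 s cadence as the log
    done
    echo "process $pid still alive after 15 s" >&2
    return 1
}
```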
00:07:43.614 [2024-05-14 23:23:06.783475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47332 ] 00:07:43.873 [2024-05-14 23:23:06.935354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.873 [2024-05-14 23:23:07.149659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.808 23:23:07 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:44.808 23:23:07 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:44.808 23:23:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:45.067 23:23:08 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 47332 00:07:45.067 23:23:08 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 47332 ']' 00:07:45.067 23:23:08 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 47332 00:07:45.067 23:23:08 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:07:45.067 23:23:08 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:45.067 23:23:08 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 47332 00:07:45.067 killing process with pid 47332 00:07:45.067 23:23:08 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:45.067 23:23:08 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:45.067 23:23:08 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47332' 00:07:45.067 23:23:08 alias_rpc -- common/autotest_common.sh@965 -- # kill 47332 00:07:45.067 23:23:08 alias_rpc -- common/autotest_common.sh@970 -- # wait 47332 00:07:47.598 ************************************ 00:07:47.598 END TEST alias_rpc 00:07:47.598 ************************************ 00:07:47.598 00:07:47.598 real 0m3.860s 00:07:47.598 user 0m3.800s 00:07:47.598 sys 0m0.526s 00:07:47.598 23:23:10 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.598 23:23:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.598 23:23:10 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:07:47.598 23:23:10 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:47.598 23:23:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:47.598 23:23:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.598 23:23:10 -- common/autotest_common.sh@10 -- # set +x 00:07:47.598 ************************************ 00:07:47.598 START TEST spdkcli_tcp 00:07:47.598 ************************************ 00:07:47.598 23:23:10 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:47.598 * Looking for test storage... 
00:07:47.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:47.598 23:23:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:47.598 23:23:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:47.598 23:23:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:47.598 23:23:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:47.598 23:23:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:47.598 23:23:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:47.598 23:23:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:47.598 23:23:10 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:47.598 23:23:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.598 23:23:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=47446 00:07:47.598 23:23:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 47446 00:07:47.598 23:23:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:47.598 23:23:10 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 47446 ']' 00:07:47.598 23:23:10 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.598 23:23:10 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:47.598 23:23:10 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.598 23:23:10 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:47.598 23:23:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.598 [2024-05-14 23:23:10.702811] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
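[Editor's note] The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper, which blocks until the freshly started spdk_tgt answers RPCs. A rough, hypothetical stand-in for that wait (the real helper lives in autotest_common.sh and does more; the retry count mirrors the max_retries=100 default visible in the trace, and spdk_get_version is used here only because it is a cheap always-available RPC):

```bash
# Hypothetical stand-in for waitforlisten: poll until the target's RPC socket answers.
wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} tries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    while (( tries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1      # target died while we waited
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 spdk_get_version \
                >/dev/null 2>&1; then
            return 0                                # RPC server is up
        fi
        sleep 0.1
    done
    return 1
}
```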
00:07:47.598 [2024-05-14 23:23:10.703001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47446 ] 00:07:47.598 [2024-05-14 23:23:10.854375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.856 [2024-05-14 23:23:11.052275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.856 [2024-05-14 23:23:11.052275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.791 23:23:11 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:48.791 23:23:11 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:07:48.791 23:23:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=47473 00:07:48.791 23:23:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:48.791 23:23:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:48.791 [ 00:07:48.791 "spdk_get_version", 00:07:48.791 "rpc_get_methods", 00:07:48.791 "keyring_get_keys", 00:07:48.791 "trace_get_info", 00:07:48.791 "trace_get_tpoint_group_mask", 00:07:48.791 "trace_disable_tpoint_group", 00:07:48.791 "trace_enable_tpoint_group", 00:07:48.791 "trace_clear_tpoint_mask", 00:07:48.791 "trace_set_tpoint_mask", 00:07:48.791 "framework_get_pci_devices", 00:07:48.791 "framework_get_config", 00:07:48.791 "framework_get_subsystems", 00:07:48.791 "iobuf_get_stats", 00:07:48.791 "iobuf_set_options", 00:07:48.791 "sock_get_default_impl", 00:07:48.791 "sock_set_default_impl", 00:07:48.791 "sock_impl_set_options", 00:07:48.791 "sock_impl_get_options", 00:07:48.791 "vmd_rescan", 00:07:48.791 "vmd_remove_device", 00:07:48.791 "vmd_enable", 00:07:48.791 "accel_get_stats", 00:07:48.791 "accel_set_options", 00:07:48.791 "accel_set_driver", 00:07:48.791 "accel_crypto_key_destroy", 00:07:48.791 "accel_crypto_keys_get", 00:07:48.791 "accel_crypto_key_create", 00:07:48.791 "accel_assign_opc", 00:07:48.791 "accel_get_module_info", 00:07:48.791 "accel_get_opc_assignments", 00:07:48.791 "notify_get_notifications", 00:07:48.791 "notify_get_types", 00:07:48.791 "bdev_get_histogram", 00:07:48.791 "bdev_enable_histogram", 00:07:48.791 "bdev_set_qos_limit", 00:07:48.791 "bdev_set_qd_sampling_period", 00:07:48.791 "bdev_get_bdevs", 00:07:48.791 "bdev_reset_iostat", 00:07:48.791 "bdev_get_iostat", 00:07:48.791 "bdev_examine", 00:07:48.791 "bdev_wait_for_examine", 00:07:48.791 "bdev_set_options", 00:07:48.791 "scsi_get_devices", 00:07:48.791 "thread_set_cpumask", 00:07:48.791 "framework_get_scheduler", 00:07:48.791 "framework_set_scheduler", 00:07:48.791 "framework_get_reactors", 00:07:48.791 "thread_get_io_channels", 00:07:48.791 "thread_get_pollers", 00:07:48.791 "thread_get_stats", 00:07:48.791 "framework_monitor_context_switch", 00:07:48.791 "spdk_kill_instance", 00:07:48.791 "log_enable_timestamps", 00:07:48.791 "log_get_flags", 00:07:48.791 "log_clear_flag", 00:07:48.791 "log_set_flag", 00:07:48.791 "log_get_level", 00:07:48.791 "log_set_level", 00:07:48.791 "log_get_print_level", 00:07:48.791 "log_set_print_level", 00:07:48.791 "framework_enable_cpumask_locks", 00:07:48.791 "framework_disable_cpumask_locks", 00:07:48.791 "framework_wait_init", 00:07:48.791 "framework_start_init", 00:07:48.791 "virtio_blk_create_transport", 00:07:48.791 "virtio_blk_get_transports", 00:07:48.791 
"vhost_controller_set_coalescing", 00:07:48.791 "vhost_get_controllers", 00:07:48.791 "vhost_delete_controller", 00:07:48.791 "vhost_create_blk_controller", 00:07:48.791 "vhost_scsi_controller_remove_target", 00:07:48.791 "vhost_scsi_controller_add_target", 00:07:48.791 "vhost_start_scsi_controller", 00:07:48.791 "vhost_create_scsi_controller", 00:07:48.791 "nbd_get_disks", 00:07:48.791 "nbd_stop_disk", 00:07:48.791 "nbd_start_disk", 00:07:48.791 "env_dpdk_get_mem_stats", 00:07:48.791 "nvmf_subsystem_get_listeners", 00:07:48.791 "nvmf_subsystem_get_qpairs", 00:07:48.791 "nvmf_subsystem_get_controllers", 00:07:48.791 "nvmf_get_stats", 00:07:48.791 "nvmf_get_transports", 00:07:48.791 "nvmf_create_transport", 00:07:48.791 "nvmf_get_targets", 00:07:48.791 "nvmf_delete_target", 00:07:48.791 "nvmf_create_target", 00:07:48.791 "nvmf_subsystem_allow_any_host", 00:07:48.791 "nvmf_subsystem_remove_host", 00:07:48.791 "nvmf_subsystem_add_host", 00:07:48.791 "nvmf_ns_remove_host", 00:07:48.791 "nvmf_ns_add_host", 00:07:48.791 "nvmf_subsystem_remove_ns", 00:07:48.791 "nvmf_subsystem_add_ns", 00:07:48.791 "nvmf_subsystem_listener_set_ana_state", 00:07:48.791 "nvmf_discovery_get_referrals", 00:07:48.791 "nvmf_discovery_remove_referral", 00:07:48.791 "nvmf_discovery_add_referral", 00:07:48.791 "nvmf_subsystem_remove_listener", 00:07:48.791 "nvmf_subsystem_add_listener", 00:07:48.791 "nvmf_delete_subsystem", 00:07:48.791 "nvmf_create_subsystem", 00:07:48.791 "nvmf_get_subsystems", 00:07:48.791 "nvmf_set_crdt", 00:07:48.791 "nvmf_set_config", 00:07:48.791 "nvmf_set_max_subsystems", 00:07:48.791 "iscsi_get_histogram", 00:07:48.791 "iscsi_enable_histogram", 00:07:48.791 "iscsi_set_options", 00:07:48.791 "iscsi_get_auth_groups", 00:07:48.791 "iscsi_auth_group_remove_secret", 00:07:48.791 "iscsi_auth_group_add_secret", 00:07:48.791 "iscsi_delete_auth_group", 00:07:48.791 "iscsi_create_auth_group", 00:07:48.791 "iscsi_set_discovery_auth", 00:07:48.791 "iscsi_get_options", 00:07:48.791 "iscsi_target_node_request_logout", 00:07:48.791 "iscsi_target_node_set_redirect", 00:07:48.791 "iscsi_target_node_set_auth", 00:07:48.791 "iscsi_target_node_add_lun", 00:07:48.791 "iscsi_get_stats", 00:07:48.791 "iscsi_get_connections", 00:07:48.791 "iscsi_portal_group_set_auth", 00:07:48.791 "iscsi_start_portal_group", 00:07:48.791 "iscsi_delete_portal_group", 00:07:48.791 "iscsi_create_portal_group", 00:07:48.791 "iscsi_get_portal_groups", 00:07:48.791 "iscsi_delete_target_node", 00:07:48.791 "iscsi_target_node_remove_pg_ig_maps", 00:07:48.791 "iscsi_target_node_add_pg_ig_maps", 00:07:48.791 "iscsi_create_target_node", 00:07:48.791 "iscsi_get_target_nodes", 00:07:48.791 "iscsi_delete_initiator_group", 00:07:48.791 "iscsi_initiator_group_remove_initiators", 00:07:48.791 "iscsi_initiator_group_add_initiators", 00:07:48.791 "iscsi_create_initiator_group", 00:07:48.791 "iscsi_get_initiator_groups", 00:07:48.791 "keyring_file_remove_key", 00:07:48.791 "keyring_file_add_key", 00:07:48.791 "iaa_scan_accel_module", 00:07:48.791 "dsa_scan_accel_module", 00:07:48.791 "ioat_scan_accel_module", 00:07:48.791 "accel_error_inject_error", 00:07:48.791 "bdev_daos_resize", 00:07:48.791 "bdev_daos_delete", 00:07:48.792 "bdev_daos_create", 00:07:48.792 "bdev_virtio_attach_controller", 00:07:48.792 "bdev_virtio_scsi_get_devices", 00:07:48.792 "bdev_virtio_detach_controller", 00:07:48.792 "bdev_virtio_blk_set_hotplug", 00:07:48.792 "bdev_ftl_set_property", 00:07:48.792 "bdev_ftl_get_properties", 00:07:48.792 "bdev_ftl_get_stats", 00:07:48.792 
"bdev_ftl_unmap", 00:07:48.792 "bdev_ftl_unload", 00:07:48.792 "bdev_ftl_delete", 00:07:48.792 "bdev_ftl_load", 00:07:48.792 "bdev_ftl_create", 00:07:48.792 "bdev_aio_delete", 00:07:48.792 "bdev_aio_rescan", 00:07:48.792 "bdev_aio_create", 00:07:48.792 "blobfs_create", 00:07:48.792 "blobfs_detect", 00:07:48.792 "blobfs_set_cache_size", 00:07:48.792 "bdev_zone_block_delete", 00:07:48.792 "bdev_zone_block_create", 00:07:48.792 "bdev_delay_delete", 00:07:48.792 "bdev_delay_create", 00:07:48.792 "bdev_delay_update_latency", 00:07:48.792 "bdev_split_delete", 00:07:48.792 "bdev_split_create", 00:07:48.792 "bdev_error_inject_error", 00:07:48.792 "bdev_error_delete", 00:07:48.792 "bdev_error_create", 00:07:48.792 "bdev_raid_set_options", 00:07:48.792 "bdev_raid_remove_base_bdev", 00:07:48.792 "bdev_raid_add_base_bdev", 00:07:48.792 "bdev_raid_delete", 00:07:48.792 "bdev_raid_create", 00:07:48.792 "bdev_raid_get_bdevs", 00:07:48.792 "bdev_lvol_check_shallow_copy", 00:07:48.792 "bdev_lvol_start_shallow_copy", 00:07:48.792 "bdev_lvol_grow_lvstore", 00:07:48.792 "bdev_lvol_get_lvols", 00:07:48.792 "bdev_lvol_get_lvstores", 00:07:48.792 "bdev_lvol_delete", 00:07:48.792 "bdev_lvol_set_read_only", 00:07:48.792 "bdev_lvol_resize", 00:07:48.792 "bdev_lvol_decouple_parent", 00:07:48.792 "bdev_lvol_inflate", 00:07:48.792 "bdev_lvol_rename", 00:07:48.792 "bdev_lvol_clone_bdev", 00:07:48.792 "bdev_lvol_clone", 00:07:48.792 "bdev_lvol_snapshot", 00:07:48.792 "bdev_lvol_create", 00:07:48.792 "bdev_lvol_delete_lvstore", 00:07:48.792 "bdev_lvol_rename_lvstore", 00:07:48.792 "bdev_lvol_create_lvstore", 00:07:48.792 "bdev_passthru_delete", 00:07:48.792 "bdev_passthru_create", 00:07:48.792 "bdev_nvme_cuse_unregister", 00:07:48.792 "bdev_nvme_cuse_register", 00:07:48.792 "bdev_opal_new_user", 00:07:48.792 "bdev_opal_set_lock_state", 00:07:48.792 "bdev_opal_delete", 00:07:48.792 "bdev_opal_get_info", 00:07:48.792 "bdev_opal_create", 00:07:48.792 "bdev_nvme_opal_revert", 00:07:48.792 "bdev_nvme_opal_init", 00:07:48.792 "bdev_nvme_send_cmd", 00:07:48.792 "bdev_nvme_get_path_iostat", 00:07:48.792 "bdev_nvme_get_mdns_discovery_info", 00:07:48.792 "bdev_nvme_stop_mdns_discovery", 00:07:48.792 "bdev_nvme_start_mdns_discovery", 00:07:48.792 "bdev_nvme_set_multipath_policy", 00:07:48.792 "bdev_nvme_set_preferred_path", 00:07:48.792 "bdev_nvme_get_io_paths", 00:07:48.792 "bdev_nvme_remove_error_injection", 00:07:48.792 "bdev_nvme_add_error_injection", 00:07:48.792 "bdev_nvme_get_discovery_info", 00:07:48.792 "bdev_nvme_stop_discovery", 00:07:48.792 "bdev_nvme_start_discovery", 00:07:48.792 "bdev_nvme_get_controller_health_info", 00:07:48.792 "bdev_nvme_disable_controller", 00:07:48.792 "bdev_nvme_enable_controller", 00:07:48.792 "bdev_nvme_reset_controller", 00:07:48.792 "bdev_nvme_get_transport_statistics", 00:07:48.792 "bdev_nvme_apply_firmware", 00:07:48.792 "bdev_nvme_detach_controller", 00:07:48.792 "bdev_nvme_get_controllers", 00:07:48.792 "bdev_nvme_attach_controller", 00:07:48.792 "bdev_nvme_set_hotplug", 00:07:48.792 "bdev_nvme_set_options", 00:07:48.792 "bdev_null_resize", 00:07:48.792 "bdev_null_delete", 00:07:48.792 "bdev_null_create", 00:07:48.792 "bdev_malloc_delete", 00:07:48.792 "bdev_malloc_create" 00:07:48.792 ] 00:07:49.050 23:23:12 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.050 23:23:12 spdkcli_tcp -- 
spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:49.050 23:23:12 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 47446 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 47446 ']' 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 47446 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 47446 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:49.050 killing process with pid 47446 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47446' 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 47446 00:07:49.050 23:23:12 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 47446 00:07:51.577 ************************************ 00:07:51.577 END TEST spdkcli_tcp 00:07:51.577 ************************************ 00:07:51.577 00:07:51.577 real 0m3.833s 00:07:51.577 user 0m6.732s 00:07:51.577 sys 0m0.547s 00:07:51.577 23:23:14 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.577 23:23:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.577 23:23:14 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:51.577 23:23:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:51.577 23:23:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.577 23:23:14 -- common/autotest_common.sh@10 -- # set +x 00:07:51.577 ************************************ 00:07:51.577 START TEST dpdk_mem_utility 00:07:51.577 ************************************ 00:07:51.577 23:23:14 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:51.577 * Looking for test storage... 00:07:51.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:51.577 23:23:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:51.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.577 23:23:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=47575 00:07:51.577 23:23:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 47575 00:07:51.577 23:23:14 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 47575 ']' 00:07:51.577 23:23:14 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.577 23:23:14 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.577 23:23:14 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
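[Editor's note] The spdkcli_tcp test that just finished exercises the RPC server over TCP: it bridges 127.0.0.1:9998 to the target's UNIX socket with socat and then points rpc.py at the TCP side, which is what produced the long rpc_get_methods listing above. The two commands below are taken from the trace; the backgrounding and cleanup around them are an illustrative addition:

```bash
# Bridge TCP 127.0.0.1:9998 to the spdk_tgt UNIX socket, as in the trace.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Query the method list through the TCP side (same retries/timeout as the log).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid"    # illustrative cleanup of the bridge
```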
00:07:51.577 23:23:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:51.577 23:23:14 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.577 23:23:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:51.577 [2024-05-14 23:23:14.592120] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:07:51.578 [2024-05-14 23:23:14.592360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47575 ] 00:07:51.578 [2024-05-14 23:23:14.754117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.835 [2024-05-14 23:23:14.989998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.772 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.772 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:07:52.772 23:23:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:52.772 23:23:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:52.772 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.772 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:52.772 { 00:07:52.772 "filename": "/tmp/spdk_mem_dump.txt" 00:07:52.772 } 00:07:52.772 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.772 23:23:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:52.772 DPDK memory size 868.000000 MiB in 1 heap(s) 00:07:52.772 1 heaps totaling size 868.000000 MiB 00:07:52.772 size: 868.000000 MiB heap id: 0 00:07:52.772 end heaps---------- 00:07:52.772 8 mempools totaling size 646.224487 MiB 00:07:52.772 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:52.772 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:52.772 size: 132.629456 MiB name: bdev_io_47575 00:07:52.772 size: 51.011292 MiB name: evtpool_47575 00:07:52.772 size: 50.003479 MiB name: msgpool_47575 00:07:52.772 size: 21.763794 MiB name: PDU_Pool 00:07:52.772 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:52.772 size: 0.026123 MiB name: Session_Pool 00:07:52.772 end mempools------- 00:07:52.772 6 memzones totaling size 4.142822 MiB 00:07:52.772 size: 1.000366 MiB name: RG_ring_0_47575 00:07:52.772 size: 1.000366 MiB name: RG_ring_1_47575 00:07:52.772 size: 1.000366 MiB name: RG_ring_4_47575 00:07:52.772 size: 1.000366 MiB name: RG_ring_5_47575 00:07:52.772 size: 0.125366 MiB name: RG_ring_2_47575 00:07:52.772 size: 0.015991 MiB name: RG_ring_3_47575 00:07:52.772 end memzones------- 00:07:52.772 23:23:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:52.772 heap id: 0 total size: 868.000000 MiB number of busy elements: 276 number of free elements: 18 00:07:52.772 list of free elements. 
size: 18.348999 MiB 00:07:52.772 element at address: 0x200000400000 with size: 1.999451 MiB 00:07:52.772 element at address: 0x200000800000 with size: 1.996887 MiB 00:07:52.772 element at address: 0x200007000000 with size: 1.995972 MiB 00:07:52.772 element at address: 0x20000b200000 with size: 1.995972 MiB 00:07:52.772 element at address: 0x20001c100040 with size: 0.999939 MiB 00:07:52.772 element at address: 0x20001c500040 with size: 0.999939 MiB 00:07:52.772 element at address: 0x20001c600000 with size: 0.999084 MiB 00:07:52.772 element at address: 0x200003e00000 with size: 0.996094 MiB 00:07:52.772 element at address: 0x200035200000 with size: 0.994324 MiB 00:07:52.772 element at address: 0x20001be00000 with size: 0.959656 MiB 00:07:52.772 element at address: 0x20001c900040 with size: 0.936401 MiB 00:07:52.772 element at address: 0x200000200000 with size: 0.831177 MiB 00:07:52.772 element at address: 0x20001e000000 with size: 0.563171 MiB 00:07:52.772 element at address: 0x20001c200000 with size: 0.487976 MiB 00:07:52.772 element at address: 0x20001ca00000 with size: 0.485413 MiB 00:07:52.772 element at address: 0x20002b400000 with size: 0.397766 MiB 00:07:52.772 element at address: 0x200013800000 with size: 0.360474 MiB 00:07:52.772 element at address: 0x200003a00000 with size: 0.349304 MiB 00:07:52.772 list of standard malloc elements. size: 199.278198 MiB 00:07:52.772 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:07:52.772 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:07:52.772 element at address: 0x20001bffff80 with size: 1.000183 MiB 00:07:52.772 element at address: 0x20001c3fff80 with size: 1.000183 MiB 00:07:52.772 element at address: 0x20001c7fff80 with size: 1.000183 MiB 00:07:52.772 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:52.772 element at address: 0x20001c9eff40 with size: 0.062683 MiB 00:07:52.772 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:52.772 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:07:52.772 element at address: 0x20001c9efdc0 with size: 0.000366 MiB 00:07:52.772 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:07:52.773 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d5e80 with size: 0.000244 MiB 
00:07:52.773 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a596c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a597c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a598c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a599c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a59ac0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a59bc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a59cc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a59dc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a59ec0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a59fc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5a0c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:07:52.773 element at 
address: 0x200003a5abc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003aff980 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003affa80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x200003eff000 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001385c480 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001385c580 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001385c680 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001385c780 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001385c880 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001385c980 with size: 0.000244 MiB 00:07:52.773 element at address: 0x2000138dccc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001befdd00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27cec0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27cfc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27d0c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27d1c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27d2c0 
with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27d3c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27d4c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27d5c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27d6c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27d7c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27d8c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c27d9c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c2fdd00 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c6ffc40 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c9efbc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001c9efcc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001cabc680 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0902c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0903c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0904c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0905c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0906c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0907c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0908c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0909c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e090ac0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e090bc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e090cc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e090dc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e090ec0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e090fc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0910c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0911c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0912c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0913c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0914c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0915c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0916c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0917c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0918c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e0919c0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e091ac0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e091bc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e091cc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e091dc0 with size: 0.000244 MiB 00:07:52.773 element at address: 0x20001e091ec0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e091fc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0920c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0921c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0922c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0923c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0924c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0925c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0926c0 with size: 0.000244 MiB 
00:07:52.774 element at address: 0x20001e0927c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0928c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0929c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e092ac0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e092bc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e092cc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e092dc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e092ec0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e092fc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0930c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0931c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0932c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0933c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0934c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0935c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0936c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0937c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0938c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0939c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e093ac0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e093bc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e093cc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e093dc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e093ec0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e093fc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0940c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0941c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0942c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0943c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0944c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0945c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0946c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0947c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0948c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0949c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e094ac0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e094bc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e094cc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e094dc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e094ec0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e094fc0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0950c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0951c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0952c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20001e0953c0 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b465d40 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b465e40 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46cb00 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46cd80 with size: 0.000244 MiB 00:07:52.774 element at 
address: 0x20002b46ce80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46cf80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46d080 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46d180 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46d280 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46d380 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46d480 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46d580 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46d680 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46d780 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46d880 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46d980 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46da80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46db80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46dc80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46dd80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46de80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46df80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46e080 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46e180 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46e280 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46e380 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46e480 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46e580 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46e680 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46e780 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46e880 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46e980 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46ea80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46eb80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46ec80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46ed80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46ee80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46ef80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46f080 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46f180 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46f280 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46f380 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46f480 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46f580 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46f680 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46f780 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46f880 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46f980 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46fa80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46fb80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46fc80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46fd80 with size: 0.000244 MiB 00:07:52.774 element at address: 0x20002b46fe80 with size: 0.000244 MiB 00:07:52.774 list of memzone associated elements. 
size: 650.372803 MiB 00:07:52.774 element at address: 0x20001e0954c0 with size: 211.416809 MiB 00:07:52.774 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:52.774 element at address: 0x20002b46ff80 with size: 157.562622 MiB 00:07:52.774 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:52.774 element at address: 0x2000139def40 with size: 132.129089 MiB 00:07:52.774 associated memzone info: size: 132.128906 MiB name: MP_bdev_io_47575_0 00:07:52.774 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:07:52.774 associated memzone info: size: 48.002930 MiB name: MP_evtpool_47575_0 00:07:52.774 element at address: 0x200003fff340 with size: 48.003113 MiB 00:07:52.774 associated memzone info: size: 48.002930 MiB name: MP_msgpool_47575_0 00:07:52.774 element at address: 0x20001cbbe900 with size: 20.255615 MiB 00:07:52.774 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:52.774 element at address: 0x2000353feb00 with size: 18.005127 MiB 00:07:52.774 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:52.774 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:07:52.774 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_47575 00:07:52.774 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:07:52.774 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_47575 00:07:52.774 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:52.774 associated memzone info: size: 1.007996 MiB name: MP_evtpool_47575 00:07:52.774 element at address: 0x20001c2fde00 with size: 1.008179 MiB 00:07:52.774 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:52.774 element at address: 0x20001cabc780 with size: 1.008179 MiB 00:07:52.774 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:52.774 element at address: 0x20001befde00 with size: 1.008179 MiB 00:07:52.774 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:52.774 element at address: 0x2000138dcdc0 with size: 1.008179 MiB 00:07:52.774 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:52.774 element at address: 0x200003eff100 with size: 1.000549 MiB 00:07:52.774 associated memzone info: size: 1.000366 MiB name: RG_ring_0_47575 00:07:52.774 element at address: 0x200003affb80 with size: 1.000549 MiB 00:07:52.774 associated memzone info: size: 1.000366 MiB name: RG_ring_1_47575 00:07:52.774 element at address: 0x20001c6ffd40 with size: 1.000549 MiB 00:07:52.774 associated memzone info: size: 1.000366 MiB name: RG_ring_4_47575 00:07:52.774 element at address: 0x2000352fe8c0 with size: 1.000549 MiB 00:07:52.774 associated memzone info: size: 1.000366 MiB name: RG_ring_5_47575 00:07:52.774 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:07:52.774 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_47575 00:07:52.774 element at address: 0x20001c27dac0 with size: 0.500549 MiB 00:07:52.774 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:52.774 element at address: 0x20001385ca80 with size: 0.500549 MiB 00:07:52.774 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:52.774 element at address: 0x20001ca7c440 with size: 0.250549 MiB 00:07:52.774 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:52.774 element at address: 0x200003adf740 with size: 0.125549 MiB 00:07:52.774 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_47575 00:07:52.774 element at address: 0x20001bef5ac0 with size: 0.031799 MiB 00:07:52.775 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:52.775 element at address: 0x20002b465f40 with size: 0.023804 MiB 00:07:52.775 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:52.775 element at address: 0x200003adb500 with size: 0.016174 MiB 00:07:52.775 associated memzone info: size: 0.015991 MiB name: RG_ring_3_47575 00:07:52.775 element at address: 0x20002b46c0c0 with size: 0.002502 MiB 00:07:52.775 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:52.775 element at address: 0x2000002d6780 with size: 0.000366 MiB 00:07:52.775 associated memzone info: size: 0.000183 MiB name: MP_msgpool_47575 00:07:52.775 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:07:52.775 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_47575 00:07:52.775 element at address: 0x20002b46cc00 with size: 0.000366 MiB 00:07:52.775 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:52.775 23:23:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:52.775 23:23:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 47575 00:07:52.775 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 47575 ']' 00:07:52.775 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 47575 00:07:52.775 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:07:52.775 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:52.775 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 47575 00:07:52.775 killing process with pid 47575 00:07:52.775 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:52.775 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:52.775 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47575' 00:07:52.775 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 47575 00:07:52.775 23:23:15 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 47575 00:07:55.305 00:07:55.305 real 0m3.804s 00:07:55.305 user 0m3.657s 00:07:55.305 sys 0m0.539s 00:07:55.305 23:23:18 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.305 23:23:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:55.305 ************************************ 00:07:55.305 END TEST dpdk_mem_utility 00:07:55.305 ************************************ 00:07:55.305 23:23:18 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:55.305 23:23:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:55.305 23:23:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.305 23:23:18 -- common/autotest_common.sh@10 -- # set +x 00:07:55.305 ************************************ 00:07:55.305 START TEST event 00:07:55.305 ************************************ 00:07:55.305 23:23:18 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:55.305 * Looking for test storage... 
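[Editor's note] The dpdk_mem_utility run that just ended is driven by two tools that can also be used by hand: the env_dpdk_get_mem_stats RPC asks the running target to write /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders it, first as the heap/mempool/memzone summary and then, with -m 0, as the per-element dump reproduced above. A short sketch of that sequence, using the same workspace paths as the trace:

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# Ask the running spdk_tgt to dump its DPDK memory state; the RPC reply
# names the dump file (/tmp/spdk_mem_dump.txt in the run above).
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarise heaps, mempools and memzones from the dump.
"$SPDK/scripts/dpdk_mem_info.py"

# Detailed element listing for heap 0, as shown in the log.
"$SPDK/scripts/dpdk_mem_info.py" -m 0
```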
00:07:55.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:55.305 23:23:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:55.305 23:23:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:55.305 23:23:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:55.305 23:23:18 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:55.305 23:23:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.305 23:23:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.305 ************************************ 00:07:55.305 START TEST event_perf 00:07:55.305 ************************************ 00:07:55.305 23:23:18 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:55.305 Running I/O for 1 seconds...[2024-05-14 23:23:18.374749] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:07:55.305 [2024-05-14 23:23:18.375088] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47704 ] 00:07:55.305 [2024-05-14 23:23:18.551705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.565 [2024-05-14 23:23:18.773770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.565 [2024-05-14 23:23:18.773886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.565 [2024-05-14 23:23:18.774108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.565 Running I/O for 1 seconds...[2024-05-14 23:23:18.774027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.944 00:07:56.944 lcore 0: 288466 00:07:56.944 lcore 1: 288465 00:07:56.944 lcore 2: 288461 00:07:56.944 lcore 3: 288465 00:07:56.944 done. 00:07:56.944 ************************************ 00:07:56.944 END TEST event_perf 00:07:56.944 ************************************ 00:07:56.944 00:07:56.944 real 0m1.815s 00:07:56.944 user 0m4.601s 00:07:56.944 sys 0m0.110s 00:07:56.944 23:23:20 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.944 23:23:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:56.944 23:23:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:56.944 23:23:20 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:56.944 23:23:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.944 23:23:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:56.944 ************************************ 00:07:56.944 START TEST event_reactor 00:07:56.944 ************************************ 00:07:56.944 23:23:20 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:57.202 [2024-05-14 23:23:20.232717] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
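[Editor's note] event_perf above is a throughput microbenchmark: with -m 0xF it starts four reactors, runs for -t 1 second, and prints one processed-event counter per lcore (the "lcore N: ..." lines). Re-running it by hand with the same arguments, plus an illustrative bit of post-processing (the output file name is invented):

```bash
# Same invocation as the trace: 4 reactors (mask 0xF), run for 1 second.
EVENT_PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
"$EVENT_PERF" -m 0xF -t 1 | tee /tmp/event_perf.out

# Illustrative post-processing: sum the per-lcore counters printed at the end.
awk '/^lcore/ {sum += $NF} END {print "total events in 1s:", sum}' /tmp/event_perf.out
```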
00:07:57.202 [2024-05-14 23:23:20.232939] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47758 ] 00:07:57.202 [2024-05-14 23:23:20.401447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.461 [2024-05-14 23:23:20.617897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.835 test_start 00:07:58.835 oneshot 00:07:58.835 tick 100 00:07:58.835 tick 100 00:07:58.835 tick 250 00:07:58.835 tick 100 00:07:58.835 tick 100 00:07:58.835 tick 100 00:07:58.835 tick 250 00:07:58.835 tick 500 00:07:58.835 tick 100 00:07:58.835 tick 100 00:07:58.835 tick 250 00:07:58.835 tick 100 00:07:58.835 tick 100 00:07:58.835 test_end 00:07:58.835 ************************************ 00:07:58.835 END TEST event_reactor 00:07:58.835 ************************************ 00:07:58.835 00:07:58.835 real 0m1.806s 00:07:58.835 user 0m1.584s 00:07:58.835 sys 0m0.121s 00:07:58.835 23:23:22 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.835 23:23:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:58.835 23:23:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:58.835 23:23:22 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:58.835 23:23:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.835 23:23:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:58.835 ************************************ 00:07:58.835 START TEST event_reactor_perf 00:07:58.835 ************************************ 00:07:58.835 23:23:22 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:58.835 [2024-05-14 23:23:22.080385] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
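[Editor's note] Every sub-test in this log is wrapped by run_test, which prints the starred START TEST / END TEST banners and the real/user/sys timing seen after each test. A simplified, hypothetical wrapper that produces the same shape of output (the real autotest_common.sh version also manages xtrace and argument checks):

```bash
# Hypothetical, simplified run_test-style wrapper; reproduces banner/timing shape only.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# e.g. run_test_sketch event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
```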
00:07:58.835 [2024-05-14 23:23:22.080601] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47802 ] 00:07:59.094 [2024-05-14 23:23:22.236984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.352 [2024-05-14 23:23:22.459131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.727 test_start 00:08:00.727 test_end 00:08:00.727 Performance: 595172 events per second 00:08:00.727 ************************************ 00:08:00.727 END TEST event_reactor_perf 00:08:00.727 ************************************ 00:08:00.727 00:08:00.727 real 0m1.759s 00:08:00.727 user 0m1.567s 00:08:00.727 sys 0m0.092s 00:08:00.727 23:23:23 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.727 23:23:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:00.727 23:23:23 event -- event/event.sh@49 -- # uname -s 00:08:00.727 23:23:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:00.727 23:23:23 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:00.727 23:23:23 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:00.727 23:23:23 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.727 23:23:23 event -- common/autotest_common.sh@10 -- # set +x 00:08:00.727 ************************************ 00:08:00.727 START TEST event_scheduler 00:08:00.727 ************************************ 00:08:00.727 23:23:23 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:00.728 * Looking for test storage... 00:08:00.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:00.728 23:23:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:00.728 23:23:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=47891 00:08:00.728 23:23:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:00.728 23:23:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:00.728 23:23:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 47891 00:08:00.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.728 23:23:23 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 47891 ']' 00:08:00.728 23:23:23 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.728 23:23:23 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:00.728 23:23:23 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.728 23:23:23 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:00.728 23:23:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:00.986 [2024-05-14 23:23:24.095463] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
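The event_perf, reactor, and reactor_perf runs traced above all follow the same invocation pattern: a hex core mask via -m (where given) and a run time in seconds via -t, with the autotest run_test wrapper producing the START/END markers and real/user/sys timing. A minimal sketch of reproducing those runs by hand, assuming a built SPDK tree at the path the log uses (the SPDK_DIR variable itself is an assumption for illustration):

#!/usr/bin/env bash
# Sketch only: re-run the event perf binaries the way the trace above shows.
# SPDK_DIR is an assumed override; the default matches the path in the log.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# event_perf: 1 second on cores 0-3 (mask 0xF), prints per-lcore event counts.
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1

# reactor: single core (the trace shows EAL started with -c 0x1), exercises
# one-shot and tick timers for 1 second.
"$SPDK_DIR/test/event/reactor/reactor" -t 1

# reactor_perf: single core, reports "Performance: N events per second".
"$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1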
00:08:00.986 [2024-05-14 23:23:24.095705] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47891 ] 00:08:00.986 [2024-05-14 23:23:24.251647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.243 [2024-05-14 23:23:24.464502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.243 [2024-05-14 23:23:24.464661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.243 [2024-05-14 23:23:24.464546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.243 [2024-05-14 23:23:24.464657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.808 23:23:24 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:01.808 23:23:24 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:08:01.808 23:23:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:01.808 23:23:24 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.808 23:23:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:01.808 POWER: Env isn't set yet! 00:08:01.808 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:01.808 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:01.808 POWER: Cannot set governor of lcore 0 to userspace 00:08:01.808 POWER: Attempting to initialise PSTAT power management... 00:08:01.808 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:01.808 POWER: Cannot set governor of lcore 0 to performance 00:08:01.808 POWER: Attempting to initialise AMD PSTATE power management... 00:08:01.808 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:01.808 POWER: Cannot set governor of lcore 0 to userspace 00:08:01.808 POWER: Attempting to initialise CPPC power management... 00:08:01.808 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:01.808 POWER: Cannot set governor of lcore 0 to userspace 00:08:01.808 POWER: Attempting to initialise VM power management... 00:08:01.808 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:01.808 POWER: Unable to set Power Management Environment for lcore 0 00:08:01.808 [2024-05-14 23:23:24.935404] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:08:01.808 [2024-05-14 23:23:24.935436] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:08:01.808 [2024-05-14 23:23:24.935474] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:08:01.808 23:23:24 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.808 23:23:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:01.808 23:23:24 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.808 23:23:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:02.066 [2024-05-14 23:23:25.308871] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:08:02.066 23:23:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.066 23:23:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:02.066 23:23:25 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:02.066 23:23:25 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.066 23:23:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:02.066 ************************************ 00:08:02.066 START TEST scheduler_create_thread 00:08:02.066 ************************************ 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.066 2 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.066 3 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.066 4 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.066 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.325 5 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.325 6 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.325 7 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.325 8 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.325 9 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.325 10 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.325 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.890 ************************************ 00:08:02.890 END TEST scheduler_create_thread 00:08:02.890 ************************************ 00:08:02.890 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.890 00:08:02.890 real 0m0.597s 00:08:02.890 user 0m0.007s 00:08:02.890 sys 0m0.005s 00:08:02.890 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.890 23:23:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.890 23:23:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:02.890 23:23:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 47891 00:08:02.890 23:23:25 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 47891 ']' 00:08:02.890 23:23:25 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 47891 00:08:02.890 23:23:25 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:08:02.890 23:23:25 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:02.890 23:23:25 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 47891 00:08:02.890 killing process with pid 47891 00:08:02.890 23:23:25 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:08:02.890 23:23:25 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:08:02.890 23:23:25 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47891' 00:08:02.890 23:23:25 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 47891 00:08:02.890 23:23:25 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 47891 00:08:03.148 [2024-05-14 23:23:26.398841] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:04.522 00:08:04.522 real 0m3.772s 00:08:04.522 user 0m6.799s 00:08:04.522 sys 0m0.429s 00:08:04.522 23:23:27 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.522 23:23:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:04.522 ************************************ 00:08:04.522 END TEST event_scheduler 00:08:04.522 ************************************ 00:08:04.522 23:23:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:04.522 modprobe: FATAL: Module nbd not found. 
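The scheduler_create_thread test traced above drives the scheduler app entirely over RPC: threads are created with a name, an optional pinned cpumask, and an active percentage, then retuned and deleted by the thread id that the create call returns. A sketch of that sequence using the same rpc_cmd helper and scheduler_plugin methods visible in the trace; sourcing autotest_common.sh standalone (and it forwarding to scripts/rpc.py) is assumed wiring, and the scheduler app must already be running with --wait-for-rpc as the test sets it up:

#!/usr/bin/env bash
# Sketch of the RPC sequence scheduler_create_thread issues, per the trace above.
# Assumption: autotest_common.sh provides rpc_cmd and can be sourced in this
# environment; in the real test it is sourced by the test scripts themselves.
source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh

# Busy threads pinned to individual cores, 100% active.
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100

# An idle pinned thread and an unpinned, partially active one.
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30

# The create call prints the new thread id; later calls operate on that id.
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50

thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"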
00:08:04.522 23:23:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:04.522 23:23:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:04.522 23:23:27 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:04.522 23:23:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.522 23:23:27 event -- common/autotest_common.sh@10 -- # set +x 00:08:04.522 ************************************ 00:08:04.522 START TEST cpu_locks 00:08:04.522 ************************************ 00:08:04.522 23:23:27 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:04.522 * Looking for test storage... 00:08:04.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:04.522 23:23:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:04.522 23:23:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:04.522 23:23:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:04.522 23:23:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:04.522 23:23:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:04.522 23:23:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.522 23:23:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:04.522 ************************************ 00:08:04.522 START TEST default_locks 00:08:04.522 ************************************ 00:08:04.522 23:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:08:04.522 23:23:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=48040 00:08:04.522 23:23:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 48040 00:08:04.522 23:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 48040 ']' 00:08:04.522 23:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.522 23:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:04.522 23:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.522 23:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:04.522 23:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:04.522 23:23:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:04.826 [2024-05-14 23:23:27.918409] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:08:04.826 [2024-05-14 23:23:27.918610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48040 ] 00:08:04.826 [2024-05-14 23:23:28.072201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.107 [2024-05-14 23:23:28.274783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.038 23:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:06.038 23:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:08:06.038 23:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 48040 00:08:06.038 23:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 48040 00:08:06.038 23:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:06.971 23:23:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 48040 00:08:06.971 23:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 48040 ']' 00:08:06.971 23:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 48040 00:08:06.971 23:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:08:06.971 23:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:06.971 23:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48040 00:08:06.971 killing process with pid 48040 00:08:06.971 23:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:06.971 23:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:06.971 23:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48040' 00:08:06.971 23:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 48040 00:08:06.971 23:23:30 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 48040 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 48040 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 48040 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:09.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:09.507 ERROR: process (pid: 48040) is no longer running 00:08:09.507 ************************************ 00:08:09.507 END TEST default_locks 00:08:09.507 ************************************ 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 48040 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 48040 ']' 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.507 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (48040) - No such process 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:09.507 00:08:09.507 real 0m4.478s 00:08:09.507 user 0m4.500s 00:08:09.507 sys 0m1.169s 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.507 23:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.507 23:23:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:09.507 23:23:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:09.507 23:23:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.507 23:23:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.507 ************************************ 00:08:09.507 START TEST default_locks_via_rpc 00:08:09.507 ************************************ 00:08:09.507 23:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:08:09.507 23:23:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=48131 00:08:09.507 23:23:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 48131 00:08:09.507 23:23:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.507 23:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 48131 ']' 00:08:09.507 23:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.507 23:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:09.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.507 23:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.507 23:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:09.507 23:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.507 [2024-05-14 23:23:32.455996] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:08:09.507 [2024-05-14 23:23:32.456511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48131 ] 00:08:09.507 [2024-05-14 23:23:32.607260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.765 [2024-05-14 23:23:32.812931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 48131 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 48131 00:08:10.698 23:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:11.631 23:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 48131 
00:08:11.631 23:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 48131 ']' 00:08:11.631 23:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 48131 00:08:11.631 23:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:08:11.631 23:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:11.631 23:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48131 00:08:11.631 killing process with pid 48131 00:08:11.631 23:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:11.631 23:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:11.631 23:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48131' 00:08:11.631 23:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 48131 00:08:11.631 23:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 48131 00:08:14.158 ************************************ 00:08:14.158 END TEST default_locks_via_rpc 00:08:14.158 ************************************ 00:08:14.158 00:08:14.158 real 0m4.515s 00:08:14.158 user 0m4.516s 00:08:14.158 sys 0m1.181s 00:08:14.158 23:23:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:14.158 23:23:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.158 23:23:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:14.158 23:23:36 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:14.158 23:23:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.158 23:23:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:14.158 ************************************ 00:08:14.158 START TEST non_locking_app_on_locked_coremask 00:08:14.158 ************************************ 00:08:14.158 23:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:08:14.158 23:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=48215 00:08:14.158 23:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 48215 /var/tmp/spdk.sock 00:08:14.158 23:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48215 ']' 00:08:14.158 23:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:14.158 23:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.158 23:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:14.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.158 23:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
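The default_locks and default_locks_via_rpc runs above assert core-lock ownership in two ways visible in the trace: while the target is alive, lslocks on its pid must show an entry for the spdk_cpu_lock file, and after shutdown no /var/tmp/spdk_cpu_lock* files may remain. A minimal standalone sketch of both checks; the helper names mirror locks_exist and no_locks from cpu_locks.sh, but the bodies here are illustrative reconstructions from the trace, not the test's exact code:

#!/usr/bin/env bash
# Sketch of the two lock assertions seen in the trace above.

locks_exist() {
    local pid=$1
    # lslocks lists the file locks a process holds; the target's per-core lock
    # file shows up by its spdk_cpu_lock name.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

no_locks() {
    local f count=0
    for f in /var/tmp/spdk_cpu_lock*; do
        # Guard against the literal, unexpanded pattern when nothing matches.
        [[ -e $f ]] && count=$((count + 1))
    done
    (( count == 0 ))
}

# Illustrative usage against a running spdk_tgt started with -m 0x1:
# locks_exist "$spdk_tgt_pid" && echo "core lock held by $spdk_tgt_pid"
# kill "$spdk_tgt_pid"; wait "$spdk_tgt_pid" 2>/dev/null
# no_locks && echo "no stale core lock files left behind"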
00:08:14.158 23:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:14.158 23:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:14.158 [2024-05-14 23:23:37.025693] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:08:14.158 [2024-05-14 23:23:37.025867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48215 ] 00:08:14.158 [2024-05-14 23:23:37.182624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.158 [2024-05-14 23:23:37.407173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.091 23:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:15.091 23:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:08:15.091 23:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=48242 00:08:15.091 23:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 48242 /var/tmp/spdk2.sock 00:08:15.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:15.091 23:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48242 ']' 00:08:15.091 23:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:15.091 23:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:15.091 23:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:15.091 23:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:15.091 23:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:15.091 23:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:15.349 [2024-05-14 23:23:38.407919] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:08:15.349 [2024-05-14 23:23:38.408146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48242 ] 00:08:15.349 [2024-05-14 23:23:38.577650] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:15.349 [2024-05-14 23:23:38.577717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.915 [2024-05-14 23:23:39.019220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.813 23:23:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:17.813 23:23:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:08:17.813 23:23:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 48215 00:08:17.813 23:23:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:17.813 23:23:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 48215 00:08:19.712 23:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 48215 00:08:19.712 23:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 48215 ']' 00:08:19.712 23:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 48215 00:08:19.712 23:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:08:19.712 23:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:19.712 23:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48215 00:08:19.712 killing process with pid 48215 00:08:19.712 23:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:19.712 23:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:19.712 23:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48215' 00:08:19.712 23:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 48215 00:08:19.712 23:23:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 48215 00:08:23.892 23:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 48242 00:08:23.892 23:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 48242 ']' 00:08:23.892 23:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 48242 00:08:23.892 23:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:08:23.892 23:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:23.892 23:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48242 00:08:24.150 killing process with pid 48242 00:08:24.150 23:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:24.150 23:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:24.150 23:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48242' 00:08:24.150 23:23:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 48242 00:08:24.150 23:23:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 48242 00:08:26.740 ************************************ 00:08:26.740 END TEST non_locking_app_on_locked_coremask 00:08:26.740 ************************************ 00:08:26.740 00:08:26.740 real 0m12.568s 00:08:26.740 user 0m13.145s 00:08:26.740 sys 0m2.306s 00:08:26.740 23:23:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.740 23:23:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:26.740 23:23:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:26.740 23:23:49 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:26.740 23:23:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.740 23:23:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:26.740 ************************************ 00:08:26.740 START TEST locking_app_on_unlocked_coremask 00:08:26.740 ************************************ 00:08:26.740 23:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:08:26.740 23:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=48424 00:08:26.740 23:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 48424 /var/tmp/spdk.sock 00:08:26.740 23:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:26.740 23:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48424 ']' 00:08:26.740 23:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.740 23:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:26.740 23:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.740 23:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:26.740 23:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:26.740 [2024-05-14 23:23:49.645872] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:08:26.740 [2024-05-14 23:23:49.646083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48424 ] 00:08:26.740 [2024-05-14 23:23:49.809981] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
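Two of the runs above start the second target with --disable-cpumask-locks (hence the "CPU core locks deactivated" notices), and the earlier default_locks_via_rpc run toggles the same behaviour at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs; that is what lets two targets share core 0 without the second one aborting. A hedged sketch of both variants, with binary and script paths taken from the log and the surrounding pid/socket handling purely illustrative:

#!/usr/bin/env bash
# Sketch: run a second SPDK target on an already-claimed core mask by turning
# the CPU core lock off, either at startup or over RPC.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Variant 1: second instance started with the lock disabled and its own socket,
# as the non_locking/locking_app_on_unlocked tests do.
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
tgt2_pid=$!

# Variant 2: toggle the lock on an already running instance over RPC
# (default_locks_via_rpc does this against the default /var/tmp/spdk.sock).
"$RPC" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
"$RPC" -s /var/tmp/spdk.sock framework_enable_cpumask_locks

kill "$tgt2_pid" 2>/dev/null || true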
00:08:26.740 [2024-05-14 23:23:49.810088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.740 [2024-05-14 23:23:50.020963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:27.675 23:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:27.675 23:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:08:27.675 23:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=48444 00:08:27.675 23:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 48444 /var/tmp/spdk2.sock 00:08:27.675 23:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48444 ']' 00:08:27.675 23:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:27.675 23:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:27.675 23:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:27.675 23:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:27.675 23:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.675 23:23:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:27.933 [2024-05-14 23:23:51.021885] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:08:27.933 [2024-05-14 23:23:51.022070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48444 ] 00:08:27.933 [2024-05-14 23:23:51.180538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.499 [2024-05-14 23:23:51.628116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.400 23:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:30.400 23:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:08:30.400 23:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 48444 00:08:30.400 23:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:30.400 23:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 48444 00:08:32.300 23:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 48424 00:08:32.300 23:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 48424 ']' 00:08:32.300 23:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 48424 00:08:32.300 23:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:08:32.300 23:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:32.300 23:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48424 00:08:32.300 killing process with pid 48424 00:08:32.300 23:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:32.300 23:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:32.300 23:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48424' 00:08:32.300 23:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 48424 00:08:32.300 23:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 48424 00:08:37.565 23:23:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 48444 00:08:37.565 23:23:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 48444 ']' 00:08:37.565 23:23:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 48444 00:08:37.565 23:23:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:08:37.565 23:23:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:37.565 23:23:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48444 00:08:37.565 killing process with pid 48444 00:08:37.565 23:23:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:37.565 23:23:59 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:37.565 23:23:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48444' 00:08:37.565 23:23:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 48444 00:08:37.565 23:23:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 48444 00:08:38.935 ************************************ 00:08:38.935 END TEST locking_app_on_unlocked_coremask 00:08:38.935 ************************************ 00:08:38.935 00:08:38.935 real 0m12.610s 00:08:38.935 user 0m13.109s 00:08:38.935 sys 0m2.338s 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:38.935 23:24:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:38.935 23:24:02 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:38.935 23:24:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:38.935 23:24:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:38.935 ************************************ 00:08:38.935 START TEST locking_app_on_locked_coremask 00:08:38.935 ************************************ 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:08:38.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=48615 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 48615 /var/tmp/spdk.sock 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48615 ']' 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:38.935 23:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:39.192 [2024-05-14 23:24:02.300058] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:08:39.192 [2024-05-14 23:24:02.300564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48615 ] 00:08:39.192 [2024-05-14 23:24:02.469962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.450 [2024-05-14 23:24:02.687743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=48636 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 48636 /var/tmp/spdk2.sock 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 48636 /var/tmp/spdk2.sock 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:40.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 48636 /var/tmp/spdk2.sock 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 48636 ']' 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:40.384 23:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.641 [2024-05-14 23:24:03.707067] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:08:40.641 [2024-05-14 23:24:03.707285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48636 ] 00:08:40.641 [2024-05-14 23:24:03.877271] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 48615 has claimed it. 00:08:40.641 [2024-05-14 23:24:03.877387] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:41.206 ERROR: process (pid: 48636) is no longer running 00:08:41.206 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (48636) - No such process 00:08:41.206 23:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:41.206 23:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:08:41.206 23:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:41.206 23:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:41.206 23:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:41.206 23:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:41.206 23:24:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 48615 00:08:41.206 23:24:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 48615 00:08:41.206 23:24:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:42.137 23:24:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 48615 00:08:42.137 23:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 48615 ']' 00:08:42.137 23:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 48615 00:08:42.137 23:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:08:42.137 23:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:42.137 23:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48615 00:08:42.137 23:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:42.137 killing process with pid 48615 00:08:42.137 23:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:42.137 23:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48615' 00:08:42.137 23:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 48615 00:08:42.137 23:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 48615 00:08:44.730 ************************************ 00:08:44.730 END TEST locking_app_on_locked_coremask 00:08:44.730 ************************************ 00:08:44.730 00:08:44.730 real 0m5.461s 00:08:44.730 user 0m5.728s 00:08:44.730 sys 0m1.296s 00:08:44.730 23:24:07 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:44.730 23:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:44.730 23:24:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:44.730 23:24:07 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:44.730 23:24:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:44.730 23:24:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:44.730 ************************************ 00:08:44.730 START TEST locking_overlapped_coremask 00:08:44.730 ************************************ 00:08:44.730 23:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:08:44.730 23:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=48719 00:08:44.730 23:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 48719 /var/tmp/spdk.sock 00:08:44.730 23:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:44.730 23:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 48719 ']' 00:08:44.730 23:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.730 23:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:44.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.730 23:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.730 23:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:44.730 23:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:44.730 [2024-05-14 23:24:07.830042] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
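The failure just above, where pid 48636 exits because pid 48615 already holds core 0, is exactly what locking_app_on_locked_coremask asserts. Outside the harness the same overlap can be provoked with roughly the sequence below; the binary path, core mask and socket name are taken from this log, while the sleep and the surrounding commands are illustrative assumptions, not part of the test script.

  # first target claims core 0 and takes a /var/tmp/spdk_cpu_lock_* lock file
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  sleep 2   # assumed: give the first target time to finish initializing
  # second target asks for the same core on a separate RPC socket and is expected to exit with an error,
  # matching the "Unable to acquire lock on assigned core mask - exiting" message above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
  echo $?                        # non-zero
  ls /var/tmp/spdk_cpu_lock_*    # the lock file still held by the first target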
00:08:44.730 [2024-05-14 23:24:07.830380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48719 ] 00:08:44.730 [2024-05-14 23:24:07.987443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:44.988 [2024-05-14 23:24:08.212212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.988 [2024-05-14 23:24:08.212349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.988 [2024-05-14 23:24:08.212354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=48751 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 48751 /var/tmp/spdk2.sock 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 48751 /var/tmp/spdk2.sock 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:45.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 48751 /var/tmp/spdk2.sock 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 48751 ']' 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:45.922 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:46.180 [2024-05-14 23:24:09.224705] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:08:46.180 [2024-05-14 23:24:09.224894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48751 ] 00:08:46.180 [2024-05-14 23:24:09.423476] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 48719 has claimed it. 00:08:46.180 [2024-05-14 23:24:09.423576] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:46.748 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (48751) - No such process 00:08:46.748 ERROR: process (pid: 48751) is no longer running 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 48719 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 48719 ']' 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 48719 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48719 00:08:46.748 killing process with pid 48719 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48719' 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 48719 00:08:46.748 23:24:09 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 48719 00:08:49.281 ************************************ 00:08:49.281 END TEST locking_overlapped_coremask 00:08:49.282 ************************************ 00:08:49.282 00:08:49.282 real 0m4.461s 00:08:49.282 user 0m11.660s 00:08:49.282 sys 0m0.573s 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.282 23:24:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:49.282 23:24:12 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:49.282 23:24:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:49.282 23:24:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.282 ************************************ 00:08:49.282 START TEST locking_overlapped_coremask_via_rpc 00:08:49.282 ************************************ 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=48820 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 48820 /var/tmp/spdk.sock 00:08:49.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 48820 ']' 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:49.282 23:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.282 [2024-05-14 23:24:12.324609] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:08:49.282 [2024-05-14 23:24:12.324794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48820 ] 00:08:49.282 [2024-05-14 23:24:12.477695] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
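The check_remaining_locks step traced a little earlier compares the glob /var/tmp/spdk_cpu_lock_* against a brace expansion for the cores in the -m 0x7 mask. Stripped of the escaped pattern that xtrace prints, the check amounts to the sketch below; this is a readable paraphrase of what the trace shows, not the literal cpu_locks.sh source.

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0, 1 and 2 for -m 0x7
  # passes only if exactly these three lock files exist, no more and no fewer
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'locks match'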
00:08:49.282 [2024-05-14 23:24:12.477786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:49.540 [2024-05-14 23:24:12.697590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.540 [2024-05-14 23:24:12.697735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.540 [2024-05-14 23:24:12.697746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.475 23:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:50.475 23:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:50.475 23:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=48843 00:08:50.475 23:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 48843 /var/tmp/spdk2.sock 00:08:50.475 23:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 48843 ']' 00:08:50.475 23:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:50.475 23:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:50.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:50.475 23:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:50.475 23:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:50.475 23:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.475 23:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:50.475 [2024-05-14 23:24:13.690143] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:08:50.475 [2024-05-14 23:24:13.690364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48843 ] 00:08:50.733 [2024-05-14 23:24:13.895047] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:50.733 [2024-05-14 23:24:13.895121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:51.301 [2024-05-14 23:24:14.316966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.301 [2024-05-14 23:24:14.327298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:51.301 [2024-05-14 23:24:14.338160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.202 [2024-05-14 23:24:16.197412] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 48820 has claimed it. 00:08:53.202 request: 00:08:53.202 { 00:08:53.202 "method": "framework_enable_cpumask_locks", 00:08:53.202 "req_id": 1 00:08:53.202 } 00:08:53.202 Got JSON-RPC error response 00:08:53.202 response: 00:08:53.202 { 00:08:53.202 "code": -32603, 00:08:53.202 "message": "Failed to claim CPU core: 2" 00:08:53.202 } 00:08:53.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
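For reference, the JSON-RPC exchange above can be reproduced by hand. rpc_cmd in the harness is a wrapper that talks to the target's RPC socket; assuming it maps onto SPDK's scripts/rpc.py in the usual way, the two calls would look roughly like this, with the socket paths taken from the log.

  # first target (default socket /var/tmp/spdk.sock): succeeds and claims cores 0-2
  scripts/rpc.py framework_enable_cpumask_locks
  # second target on /var/tmp/spdk2.sock: fails with the -32603 "Failed to claim CPU core: 2" error shown above
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks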
00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 48820 /var/tmp/spdk.sock 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 48820 ']' 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 48843 /var/tmp/spdk2.sock 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 48843 ']' 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:53.202 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.461 ************************************ 00:08:53.461 END TEST locking_overlapped_coremask_via_rpc 00:08:53.461 ************************************ 00:08:53.461 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:53.461 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:53.461 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:53.461 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:53.461 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:53.461 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:53.461 00:08:53.461 real 0m4.428s 00:08:53.461 user 0m1.349s 00:08:53.461 sys 0m0.141s 00:08:53.461 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:53.461 23:24:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.461 23:24:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:53.461 23:24:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 48820 ]] 00:08:53.461 23:24:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 48820 00:08:53.461 23:24:16 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 48820 ']' 00:08:53.461 23:24:16 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 48820 00:08:53.461 23:24:16 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:08:53.461 23:24:16 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:53.461 23:24:16 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48820 00:08:53.461 23:24:16 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:53.461 23:24:16 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:53.461 killing process with pid 48820 00:08:53.461 23:24:16 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48820' 00:08:53.461 23:24:16 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 48820 00:08:53.461 23:24:16 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 48820 00:08:55.992 23:24:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 48843 ]] 00:08:55.992 23:24:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 48843 00:08:55.992 23:24:18 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 48843 ']' 00:08:55.992 23:24:18 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 48843 00:08:55.992 23:24:18 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:08:55.992 23:24:18 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:55.992 
23:24:18 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48843 00:08:55.992 killing process with pid 48843 00:08:55.992 23:24:18 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:08:55.992 23:24:18 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:08:55.992 23:24:18 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48843' 00:08:55.992 23:24:18 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 48843 00:08:55.992 23:24:18 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 48843 00:08:57.890 23:24:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:57.890 Process with pid 48820 is not found 00:08:57.890 Process with pid 48843 is not found 00:08:57.890 23:24:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:57.890 23:24:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 48820 ]] 00:08:57.890 23:24:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 48820 00:08:57.890 23:24:21 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 48820 ']' 00:08:57.890 23:24:21 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 48820 00:08:57.890 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (48820) - No such process 00:08:57.890 23:24:21 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 48820 is not found' 00:08:57.890 23:24:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 48843 ]] 00:08:57.890 23:24:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 48843 00:08:57.890 23:24:21 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 48843 ']' 00:08:57.890 23:24:21 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 48843 00:08:57.890 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (48843) - No such process 00:08:57.890 23:24:21 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 48843 is not found' 00:08:57.890 23:24:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:57.890 00:08:57.890 real 0m53.420s 00:08:57.890 user 1m27.092s 00:08:57.890 sys 0m10.068s 00:08:57.890 23:24:21 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:57.890 23:24:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:57.890 ************************************ 00:08:57.890 END TEST cpu_locks 00:08:57.890 ************************************ 00:08:57.890 00:08:57.890 real 1m2.930s 00:08:57.890 user 1m41.761s 00:08:57.890 sys 0m11.015s 00:08:57.890 23:24:21 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:57.890 ************************************ 00:08:57.890 END TEST event 00:08:57.890 ************************************ 00:08:57.890 23:24:21 event -- common/autotest_common.sh@10 -- # set +x 00:08:58.152 23:24:21 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:58.152 23:24:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:58.152 23:24:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:58.152 23:24:21 -- common/autotest_common.sh@10 -- # set +x 00:08:58.152 ************************************ 00:08:58.152 START TEST thread 00:08:58.152 ************************************ 00:08:58.152 23:24:21 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:58.152 * Looking for test storage... 
00:08:58.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:58.152 23:24:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:58.152 23:24:21 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:58.152 23:24:21 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:58.152 23:24:21 thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.152 ************************************ 00:08:58.152 START TEST thread_poller_perf 00:08:58.152 ************************************ 00:08:58.152 23:24:21 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:58.152 [2024-05-14 23:24:21.327944] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:08:58.152 [2024-05-14 23:24:21.328135] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49060 ] 00:08:58.411 [2024-05-14 23:24:21.497410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.669 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:58.669 [2024-05-14 23:24:21.720229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.043 ====================================== 00:09:00.043 busy:2205253660 (cyc) 00:09:00.043 total_run_count: 1329000 00:09:00.043 tsc_hz: 2200000000 (cyc) 00:09:00.043 ====================================== 00:09:00.043 poller_cost: 1659 (cyc), 754 (nsec) 00:09:00.043 ************************************ 00:09:00.043 END TEST thread_poller_perf 00:09:00.043 ************************************ 00:09:00.043 00:09:00.043 real 0m1.771s 00:09:00.043 user 0m1.568s 00:09:00.043 sys 0m0.102s 00:09:00.043 23:24:23 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:00.043 23:24:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:00.043 23:24:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:00.043 23:24:23 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:09:00.043 23:24:23 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:00.043 23:24:23 thread -- common/autotest_common.sh@10 -- # set +x 00:09:00.043 ************************************ 00:09:00.043 START TEST thread_poller_perf 00:09:00.043 ************************************ 00:09:00.043 23:24:23 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:00.043 [2024-05-14 23:24:23.148593] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:00.043 [2024-05-14 23:24:23.148818] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49104 ] 00:09:00.043 [2024-05-14 23:24:23.312284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.302 Running 1000 pollers for 1 seconds with 0 microseconds period. 
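The summary block above is internally consistent: poller_cost works out to the busy cycle count divided by the number of completed poller calls, converted to nanoseconds with the reported TSC rate. Here 2205253660 cyc / 1329000 calls ≈ 1659 cyc per call, and 1659 / 2.2 cyc per nsec ≈ 754 nsec, matching the printed figures; the zero-period run that follows works out the same way (2203885166 / 13992000 ≈ 157 cyc ≈ 71 nsec). A quick check with bc, outside the test:

  echo '2205253660 / 1329000' | bc              # 1659 cycles per poller call
  echo '1659 * 1000000000 / 2200000000' | bc    # 754 nanoseconds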
00:09:00.302 [2024-05-14 23:24:23.527575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.678 ====================================== 00:09:01.678 busy:2203885166 (cyc) 00:09:01.678 total_run_count: 13992000 00:09:01.678 tsc_hz: 2200000000 (cyc) 00:09:01.678 ====================================== 00:09:01.678 poller_cost: 157 (cyc), 71 (nsec) 00:09:01.678 ************************************ 00:09:01.678 END TEST thread_poller_perf 00:09:01.678 ************************************ 00:09:01.678 00:09:01.678 real 0m1.742s 00:09:01.678 user 0m1.539s 00:09:01.678 sys 0m0.101s 00:09:01.678 23:24:24 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:01.678 23:24:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:01.678 23:24:24 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:01.678 23:24:24 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:01.678 23:24:24 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:01.678 23:24:24 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:01.678 23:24:24 thread -- common/autotest_common.sh@10 -- # set +x 00:09:01.678 ************************************ 00:09:01.678 START TEST thread_spdk_lock 00:09:01.678 ************************************ 00:09:01.678 23:24:24 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:01.678 [2024-05-14 23:24:24.939511] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:01.678 [2024-05-14 23:24:24.939683] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49145 ] 00:09:01.937 [2024-05-14 23:24:25.088979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.196 [2024-05-14 23:24:25.302323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.196 [2024-05-14 23:24:25.302331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.762 [2024-05-14 23:24:25.806755] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:02.762 [2024-05-14 23:24:25.806866] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:02.762 [2024-05-14 23:24:25.806905] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0xc3b0c0 00:09:02.762 [2024-05-14 23:24:25.814742] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:02.762 [2024-05-14 23:24:25.814844] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:02.762 [2024-05-14 23:24:25.814878] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:03.020 Starting test 
contend 00:09:03.020 Worker Delay Wait us Hold us Total us 00:09:03.020 0 3 188402 187302 375704 00:09:03.020 1 5 101543 291615 393158 00:09:03.020 PASS test contend 00:09:03.020 Starting test hold_by_poller 00:09:03.020 PASS test hold_by_poller 00:09:03.020 Starting test hold_by_message 00:09:03.020 PASS test hold_by_message 00:09:03.020 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:09:03.020 100014 assertions passed 00:09:03.020 0 assertions failed 00:09:03.020 ************************************ 00:09:03.020 END TEST thread_spdk_lock 00:09:03.020 ************************************ 00:09:03.020 00:09:03.020 real 0m1.243s 00:09:03.020 user 0m1.542s 00:09:03.020 sys 0m0.112s 00:09:03.020 23:24:26 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:03.020 23:24:26 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:09:03.020 ************************************ 00:09:03.020 END TEST thread 00:09:03.020 ************************************ 00:09:03.020 00:09:03.020 real 0m4.993s 00:09:03.020 user 0m4.729s 00:09:03.020 sys 0m0.451s 00:09:03.020 23:24:26 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:03.020 23:24:26 thread -- common/autotest_common.sh@10 -- # set +x 00:09:03.020 23:24:26 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:03.020 23:24:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:03.020 23:24:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:03.020 23:24:26 -- common/autotest_common.sh@10 -- # set +x 00:09:03.020 ************************************ 00:09:03.020 START TEST accel 00:09:03.020 ************************************ 00:09:03.020 23:24:26 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:03.020 * Looking for test storage... 00:09:03.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:03.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.020 23:24:26 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:09:03.020 23:24:26 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:09:03.020 23:24:26 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:03.020 23:24:26 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=49242 00:09:03.020 23:24:26 accel -- accel/accel.sh@63 -- # waitforlisten 49242 00:09:03.021 23:24:26 accel -- common/autotest_common.sh@827 -- # '[' -z 49242 ']' 00:09:03.021 23:24:26 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.021 23:24:26 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:03.021 23:24:26 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
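A small consistency note on the contend table a few entries up: Wait us plus Hold us equals Total us for both workers (188402 + 187302 = 375704 for worker 0, 101543 + 291615 = 393158 for worker 1), i.e. each worker's total is just the time it spent waiting for the contended lock plus the time it spent holding it.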
00:09:03.021 23:24:26 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:03.021 23:24:26 accel -- common/autotest_common.sh@10 -- # set +x 00:09:03.021 23:24:26 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:03.021 23:24:26 accel -- accel/accel.sh@61 -- # build_accel_config 00:09:03.021 23:24:26 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:03.021 23:24:26 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:03.021 23:24:26 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:03.279 23:24:26 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:03.279 23:24:26 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:03.279 23:24:26 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:03.279 23:24:26 accel -- accel/accel.sh@41 -- # jq -r . 00:09:03.279 [2024-05-14 23:24:26.451607] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:03.279 [2024-05-14 23:24:26.451802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49242 ] 00:09:03.537 [2024-05-14 23:24:26.618279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.795 [2024-05-14 23:24:26.829507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.360 23:24:27 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:04.360 23:24:27 accel -- common/autotest_common.sh@860 -- # return 0 00:09:04.360 23:24:27 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:09:04.360 23:24:27 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:09:04.360 23:24:27 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:09:04.360 23:24:27 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:09:04.360 23:24:27 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:04.360 23:24:27 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:09:04.360 23:24:27 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:04.360 23:24:27 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.360 23:24:27 accel -- common/autotest_common.sh@10 -- # set +x 00:09:04.360 23:24:27 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 
23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # IFS== 00:09:04.619 23:24:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:04.619 23:24:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:04.619 23:24:27 accel -- accel/accel.sh@75 -- # killprocess 49242 00:09:04.619 23:24:27 accel -- common/autotest_common.sh@946 -- # '[' -z 49242 ']' 00:09:04.619 23:24:27 accel -- common/autotest_common.sh@950 -- # kill -0 49242 00:09:04.619 23:24:27 accel -- common/autotest_common.sh@951 -- # uname 00:09:04.619 23:24:27 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:04.619 23:24:27 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 49242 00:09:04.619 killing process with pid 49242 00:09:04.619 23:24:27 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:04.619 23:24:27 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:04.619 23:24:27 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49242' 00:09:04.619 23:24:27 accel -- common/autotest_common.sh@965 -- # kill 49242 00:09:04.619 23:24:27 accel -- common/autotest_common.sh@970 -- # wait 49242 00:09:07.161 23:24:29 accel -- accel/accel.sh@76 -- # trap - ERR 00:09:07.161 23:24:29 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:09:07.161 23:24:29 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:07.161 23:24:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:07.161 23:24:29 accel -- common/autotest_common.sh@10 -- # set +x 00:09:07.161 23:24:29 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:09:07.161 23:24:29 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:07.161 23:24:29 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:09:07.161 23:24:29 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:07.161 23:24:29 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:07.161 23:24:29 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:07.161 23:24:29 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:07.161 23:24:29 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:07.161 23:24:29 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:09:07.161 23:24:29 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
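The expected_opcs table being filled in above comes from the accel_get_opc_assignments RPC piped through the jq filter shown in the trace, with each key=value line then split by the IFS== read loop. On a made-up two-opcode response (the real reply lists many more opcodes), the same filter and loop behave like this; the sample JSON is illustrative only.

  echo '{"copy":"software","fill":"software"}' \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' \
    | while IFS== read -r opc module; do
        echo "opcode $opc is assigned to the $module module"
      done
  # prints: opcode copy is assigned to the software module
  #         opcode fill is assigned to the software module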
00:09:07.161 23:24:30 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:07.161 23:24:30 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:09:07.161 23:24:30 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:07.161 23:24:30 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:07.161 23:24:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:07.161 23:24:30 accel -- common/autotest_common.sh@10 -- # set +x 00:09:07.161 ************************************ 00:09:07.161 START TEST accel_missing_filename 00:09:07.161 ************************************ 00:09:07.161 23:24:30 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:09:07.161 23:24:30 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:09:07.161 23:24:30 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:07.161 23:24:30 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:07.161 23:24:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:07.161 23:24:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:07.162 23:24:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:07.162 23:24:30 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:09:07.162 23:24:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:07.162 23:24:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:09:07.162 23:24:30 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:07.162 23:24:30 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:07.162 23:24:30 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:07.162 23:24:30 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:07.162 23:24:30 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:07.162 23:24:30 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:09:07.162 23:24:30 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:09:07.162 [2024-05-14 23:24:30.230450] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:07.162 [2024-05-14 23:24:30.230671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49336 ] 00:09:07.162 [2024-05-14 23:24:30.382902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.420 [2024-05-14 23:24:30.595754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.678 [2024-05-14 23:24:30.791799] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:08.245 [2024-05-14 23:24:31.258774] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:09:08.503 A filename is required. 
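The accel_missing_filename test just above follows the NOT pattern used throughout this log: the wrapped accel_perf call is expected to fail (here with "A filename is required."), and the es bookkeeping that follows turns that expected failure into a pass. In spirit the helper behaves like the sketch below, which is a simplified paraphrase rather than the real autotest_common.sh function.

  NOT() { if "$@"; then return 1; else return 0; fi; }
  NOT false && echo 'expected failure observed'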
00:09:08.503 ************************************ 00:09:08.503 END TEST accel_missing_filename 00:09:08.503 ************************************ 00:09:08.503 23:24:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:09:08.503 23:24:31 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:08.503 23:24:31 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:09:08.503 23:24:31 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:09:08.503 23:24:31 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:09:08.503 23:24:31 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:08.503 00:09:08.503 real 0m1.539s 00:09:08.503 user 0m1.223s 00:09:08.503 sys 0m0.175s 00:09:08.503 23:24:31 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:08.503 23:24:31 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:09:08.503 23:24:31 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:08.503 23:24:31 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:09:08.503 23:24:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:08.503 23:24:31 accel -- common/autotest_common.sh@10 -- # set +x 00:09:08.503 ************************************ 00:09:08.503 START TEST accel_compress_verify 00:09:08.503 ************************************ 00:09:08.503 23:24:31 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:08.503 23:24:31 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:09:08.503 23:24:31 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:08.503 23:24:31 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:08.503 23:24:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.503 23:24:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:08.503 23:24:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.503 23:24:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:08.503 23:24:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:08.503 23:24:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:08.503 23:24:31 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:08.503 23:24:31 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:08.503 23:24:31 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:08.503 23:24:31 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:08.503 23:24:31 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:08.503 23:24:31 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:08.503 23:24:31 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:09:08.761 [2024-05-14 23:24:31.814098] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:08.761 [2024-05-14 23:24:31.814349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49381 ] 00:09:08.761 [2024-05-14 23:24:31.964879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.019 [2024-05-14 23:24:32.177475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.277 [2024-05-14 23:24:32.379381] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:09.841 [2024-05-14 23:24:32.855542] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:09:10.099 00:09:10.099 Compression does not support the verify option, aborting. 00:09:10.099 ************************************ 00:09:10.099 END TEST accel_compress_verify 00:09:10.099 ************************************ 00:09:10.099 23:24:33 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:09:10.099 23:24:33 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.099 23:24:33 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:09:10.099 23:24:33 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:09:10.099 23:24:33 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:09:10.099 23:24:33 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.099 00:09:10.099 real 0m1.537s 00:09:10.099 user 0m1.223s 00:09:10.099 sys 0m0.175s 00:09:10.099 23:24:33 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.099 23:24:33 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:09:10.099 23:24:33 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:10.099 23:24:33 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:10.099 23:24:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.099 23:24:33 accel -- common/autotest_common.sh@10 -- # set +x 00:09:10.099 ************************************ 00:09:10.099 START TEST accel_wrong_workload 00:09:10.099 ************************************ 00:09:10.099 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:09:10.099 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:09:10.099 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:10.099 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:10.099 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.099 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:10.099 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.099 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:09:10.099 23:24:33 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:10.099 23:24:33 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:09:10.099 23:24:33 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:10.099 23:24:33 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:10.099 23:24:33 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:10.099 23:24:33 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.099 23:24:33 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:10.099 23:24:33 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:09:10.099 23:24:33 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:09:10.447 Unsupported workload type: foobar 00:09:10.447 [2024-05-14 23:24:33.389928] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:10.447 accel_perf options: 00:09:10.447 [-h help message] 00:09:10.447 [-q queue depth per core] 00:09:10.447 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:10.447 [-T number of threads per core 00:09:10.447 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:10.447 [-t time in seconds] 00:09:10.447 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:10.447 [ dif_verify, , dif_generate, dif_generate_copy 00:09:10.447 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:10.447 [-l for compress/decompress workloads, name of uncompressed input file 00:09:10.447 [-S for crc32c workload, use this seed value (default 0) 00:09:10.447 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:10.447 [-f for fill workload, use this BYTE value (default 255) 00:09:10.447 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:10.447 [-y verify result if this switch is on] 00:09:10.447 [-a tasks to allocate per core (default: same value as -q)] 00:09:10.447 Can be used to spread operations across a wider range of memory. 
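Annotation: the option listing above is accel_perf's own help text, emitted after '-w foobar' was rejected as an unsupported opcode. For comparison, a minimal well-formed invocation of the same binary, assembled only from flags documented in that listing (-t run time, -w workload, -S crc32c seed, -y verify) and leaving out the '-c /dev/fd/62' JSON config the harness adds, would look roughly like the sketch below; it is an illustration, not a command captured from this run.

  # sketch: 1-second software crc32c run, seed 32, with result verification
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y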
00:09:10.447 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:09:10.447 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.447 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.447 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.447 00:09:10.447 real 0m0.150s 00:09:10.447 user 0m0.089s 00:09:10.447 sys 0m0.030s 00:09:10.447 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.447 23:24:33 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:09:10.447 ************************************ 00:09:10.447 END TEST accel_wrong_workload 00:09:10.447 ************************************ 00:09:10.447 23:24:33 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:10.447 23:24:33 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:09:10.447 23:24:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.447 23:24:33 accel -- common/autotest_common.sh@10 -- # set +x 00:09:10.447 ************************************ 00:09:10.447 START TEST accel_negative_buffers 00:09:10.447 ************************************ 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:09:10.447 23:24:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:10.447 23:24:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:09:10.447 23:24:33 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:10.447 23:24:33 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:10.447 23:24:33 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:10.447 23:24:33 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.447 23:24:33 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:10.447 23:24:33 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:09:10.447 23:24:33 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:09:10.447 -x option must be non-negative. 
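Annotation: the '-x -1' rejection above happens because, as the option listing printed next repeats, the xor workload needs at least two source buffers. A well-formed xor invocation under that constraint (an illustrative sketch, not a command captured here) would be:

  # sketch: 1-second xor across the minimum of 2 source buffers, verified
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2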
00:09:10.447 [2024-05-14 23:24:33.582033] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:10.447 accel_perf options: 00:09:10.447 [-h help message] 00:09:10.447 [-q queue depth per core] 00:09:10.447 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:10.447 [-T number of threads per core 00:09:10.447 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:10.447 [-t time in seconds] 00:09:10.447 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:10.447 [ dif_verify, , dif_generate, dif_generate_copy 00:09:10.447 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:10.447 [-l for compress/decompress workloads, name of uncompressed input file 00:09:10.447 [-S for crc32c workload, use this seed value (default 0) 00:09:10.447 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:10.447 [-f for fill workload, use this BYTE value (default 255) 00:09:10.447 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:10.447 [-y verify result if this switch is on] 00:09:10.447 [-a tasks to allocate per core (default: same value as -q)] 00:09:10.447 Can be used to spread operations across a wider range of memory. 00:09:10.447 ************************************ 00:09:10.447 END TEST accel_negative_buffers 00:09:10.447 ************************************ 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.447 00:09:10.447 real 0m0.151s 00:09:10.447 user 0m0.083s 00:09:10.447 sys 0m0.032s 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.447 23:24:33 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:09:10.447 23:24:33 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:10.447 23:24:33 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:10.447 23:24:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.447 23:24:33 accel -- common/autotest_common.sh@10 -- # set +x 00:09:10.447 ************************************ 00:09:10.447 START TEST accel_crc32c 00:09:10.447 ************************************ 00:09:10.448 23:24:33 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:10.448 23:24:33 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:10.705 [2024-05-14 23:24:33.795733] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:10.705 [2024-05-14 23:24:33.795956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49480 ] 00:09:10.705 [2024-05-14 23:24:33.968171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.963 [2024-05-14 23:24:34.182120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
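Annotation: the repeating '# IFS=:', '# read -r var val' and '# case "$var" in' entries are bash xtrace output from accel.sh as it walks the crc32c settings for this run (opcode crc32c, seed 32, 4096-byte buffers, software module, 1 second). A minimal sketch of that shell idiom, using hypothetical record names rather than the real accel.sh contents, is:

  # feed colon-separated "var:val" records through read -r and dispatch on var
  printf '%s\n' 'opc:crc32c' 'seed:32' 'block:4096 bytes' |
      while IFS=: read -r var val; do
          case "$var" in
              opc)  echo "opcode -> $val" ;;
              seed) echo "seed   -> $val" ;;
              *)    echo "$var -> $val" ;;
          esac
      done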
00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.222 23:24:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.123 23:24:36 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:13.123 23:24:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:13.123 00:09:13.123 real 0m2.588s 00:09:13.123 user 0m2.233s 00:09:13.123 sys 0m0.184s 00:09:13.123 ************************************ 00:09:13.123 END TEST accel_crc32c 00:09:13.123 ************************************ 00:09:13.123 23:24:36 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:13.123 23:24:36 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:13.123 23:24:36 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:13.123 23:24:36 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:13.123 23:24:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:13.123 23:24:36 accel -- common/autotest_common.sh@10 -- # set +x 00:09:13.123 ************************************ 00:09:13.123 START TEST accel_crc32c_C2 00:09:13.123 ************************************ 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:13.123 23:24:36 
accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:13.123 23:24:36 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:13.123 [2024-05-14 23:24:36.405412] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:13.123 [2024-05-14 23:24:36.405614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49531 ] 00:09:13.381 [2024-05-14 23:24:36.557197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.638 [2024-05-14 23:24:36.806037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.896 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.896 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.896 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.896 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.896 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.896 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.896 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.896 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.896 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:13.896 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:13.897 23:24:37 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.897 23:24:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:15.798 ************************************ 00:09:15.798 END TEST accel_crc32c_C2 00:09:15.798 ************************************ 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:15.798 00:09:15.798 real 0m2.581s 00:09:15.798 user 0m2.261s 00:09:15.798 sys 0m0.175s 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:15.798 23:24:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:15.798 23:24:38 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:15.798 23:24:38 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:15.798 23:24:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:15.798 23:24:38 accel -- common/autotest_common.sh@10 -- # set +x 00:09:15.798 ************************************ 00:09:15.798 START TEST accel_copy 00:09:15.798 ************************************ 00:09:15.798 23:24:38 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:15.798 
23:24:38 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:15.798 23:24:38 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:09:15.798 [2024-05-14 23:24:39.018056] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:15.798 [2024-05-14 23:24:39.018517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49600 ] 00:09:16.056 [2024-05-14 23:24:39.171749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.315 [2024-05-14 23:24:39.385326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy 
-- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:16.315 23:24:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
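Annotation: the copy run traced here uses accel_test's defaults, so per the option listing earlier in this log it moves the default 4 KiB buffers at the default queue depth. The same listing documents -o (transfer size), -q (queue depth) and -a (tasks per core) for tuning; a hedged example, not taken from this run, would be:

  # sketch: 1-second software copy of 8 KiB buffers, queue depth 64, 64 tasks, verified
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y -o 8192 -q 64 -a 64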
00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:18.215 ************************************ 00:09:18.215 END TEST accel_copy 00:09:18.215 ************************************ 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:09:18.215 23:24:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:18.215 00:09:18.215 real 0m2.533s 00:09:18.215 user 0m2.223s 00:09:18.215 sys 0m0.168s 00:09:18.215 23:24:41 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:18.215 23:24:41 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:09:18.215 23:24:41 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:18.215 23:24:41 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:18.215 23:24:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:18.215 23:24:41 accel -- common/autotest_common.sh@10 -- # set +x 00:09:18.215 ************************************ 00:09:18.215 START TEST accel_fill 00:09:18.215 ************************************ 00:09:18.215 23:24:41 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:09:18.215 23:24:41 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:09:18.473 [2024-05-14 23:24:41.595701] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:09:18.473 [2024-05-14 23:24:41.595922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49655 ] 00:09:18.473 [2024-05-14 23:24:41.757910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.765 [2024-05-14 23:24:41.968900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@22 -- # 
accel_module=software 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:19.045 23:24:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:20.945 23:24:43 accel.accel_fill 
-- accel/accel.sh@20 -- # val= 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:20.945 23:24:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:20.945 ************************************ 00:09:20.945 END TEST accel_fill 00:09:20.945 ************************************ 00:09:20.945 23:24:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:20.945 23:24:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:09:20.945 23:24:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:20.945 00:09:20.945 real 0m2.547s 00:09:20.945 user 0m2.266s 00:09:20.945 sys 0m0.161s 00:09:20.945 23:24:44 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:20.945 23:24:44 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:09:20.945 23:24:44 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:20.945 23:24:44 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:20.945 23:24:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:20.945 23:24:44 accel -- common/autotest_common.sh@10 -- # set +x 00:09:20.945 ************************************ 00:09:20.945 START TEST accel_copy_crc32c 00:09:20.945 ************************************ 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:20.945 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:20.945 [2024-05-14 23:24:44.181281] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
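Annotation: the accel_fill case that finished just above was launched as 'accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y', and its xtrace shows those values being applied (fill byte 0x80, queue depth 64, 64 tasks, 4096-byte buffers, 1 second). Mapped back onto the accel_perf flags documented earlier in this log, a standalone equivalent would be roughly the sketch below, offered as an illustration rather than a captured command:

  # sketch: 1-second fill with byte value 128 (0x80), queue depth 64, 64 tasks, verified
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y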
00:09:20.945 [2024-05-14 23:24:44.181488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49708 ] 00:09:21.203 [2024-05-14 23:24:44.339275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.461 [2024-05-14 23:24:44.575695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.719 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:21.720 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.720 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.720 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:21.720 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:21.720 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:21.720 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:21.720 23:24:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:23.634 ************************************ 00:09:23.634 END TEST accel_copy_crc32c 00:09:23.634 ************************************ 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:23.634 00:09:23.634 real 0m2.571s 00:09:23.634 user 0m2.242s 00:09:23.634 sys 0m0.185s 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:23.634 23:24:46 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:23.634 23:24:46 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:23.634 23:24:46 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:23.634 23:24:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:23.634 23:24:46 accel -- common/autotest_common.sh@10 -- # set +x 00:09:23.634 ************************************ 00:09:23.634 START TEST accel_copy_crc32c_C2 00:09:23.634 ************************************ 00:09:23.634 23:24:46 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:23.634 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:23.634 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:23.634 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:23.634 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:23.634 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:23.634 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:09:23.634 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:23.635 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:23.635 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:23.635 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:23.635 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:23.635 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:23.635 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:23.635 23:24:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:23.635 [2024-05-14 23:24:46.791690] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:23.635 [2024-05-14 23:24:46.791872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49772 ] 00:09:23.892 [2024-05-14 23:24:46.939484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.892 [2024-05-14 23:24:47.150631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:24.150 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
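The dense trace running through this block is the harness reading accel_perf's configuration back one key:value pair at a time (the IFS=:, read -r var val and case "$var" in fragments above, with accel_module and accel_opc assigned at accel.sh@22 and @23). A minimal sketch of that parsing pattern follows; the key names and sample input are illustrative assumptions, since the exact accel_perf output format is not shown in the trace.

# Parse "key: value" lines the way the trace above suggests accel.sh does;
# the heredoc input below is made up for illustration only.
while IFS=: read -r var val; do
  case "$var" in
    *module*) accel_module=${val# } ;;   # e.g. "software", as read back above
    *opcode*) accel_opc=${val# } ;;      # e.g. "copy_crc32c"
  esac
done <<'EOF'
module: software
opcode: copy_crc32c
EOF
echo "module=$accel_module opcode=$accel_opc"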
00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:24.151 23:24:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.049 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.049 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.049 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:26.050 ************************************ 00:09:26.050 END TEST accel_copy_crc32c_C2 00:09:26.050 ************************************ 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:26.050 00:09:26.050 real 0m2.567s 00:09:26.050 user 0m2.242s 00:09:26.050 sys 0m0.169s 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:26.050 23:24:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:26.050 23:24:49 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:09:26.050 23:24:49 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:26.050 23:24:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:26.050 23:24:49 accel -- common/autotest_common.sh@10 -- # set +x 00:09:26.050 ************************************ 00:09:26.050 START TEST accel_dualcast 00:09:26.050 ************************************ 00:09:26.050 23:24:49 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:09:26.050 23:24:49 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:09:26.308 [2024-05-14 23:24:49.401768] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
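As the command line above shows, the dualcast case is the accel_perf example run with a JSON accel config handed over a file descriptor (-c /dev/fd/62 most likely comes from process substitution). A standalone approximation is sketched below, assuming a built SPDK tree at the path printed in the log, huge pages already configured, and that the flags mean what the surrounding values suggest (-t run time in seconds, -w workload, -y verify).

SPDK=/home/vagrant/spdk_repo/spdk              # tree location as printed in this log
sudo "$SPDK/build/examples/accel_perf" -t 1 -w dualcast -y
# The harness additionally passes -c <(...) with a generated JSON config; judging by
# the [[ 0 -gt 0 ]] checks in build_accel_config above, that config only gains entries
# when a specific accel module is requested, so it is omitted in this sketch.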
00:09:26.308 [2024-05-14 23:24:49.401948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49826 ] 00:09:26.308 [2024-05-14 23:24:49.551029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.567 [2024-05-14 23:24:49.761328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.825 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:26.826 23:24:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:28.726 23:24:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:28.727 
23:24:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:28.727 ************************************ 00:09:28.727 END TEST accel_dualcast 00:09:28.727 ************************************ 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:09:28.727 23:24:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:28.727 00:09:28.727 real 0m2.535s 00:09:28.727 user 0m2.227s 00:09:28.727 sys 0m0.171s 00:09:28.727 23:24:51 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:28.727 23:24:51 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:09:28.727 23:24:51 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:09:28.727 23:24:51 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:28.727 23:24:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:28.727 23:24:51 accel -- common/autotest_common.sh@10 -- # set +x 00:09:28.727 ************************************ 00:09:28.727 START TEST accel_compare 00:09:28.727 ************************************ 00:09:28.727 23:24:51 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:09:28.727 23:24:51 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:09:28.727 [2024-05-14 23:24:51.978038] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
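Every block in this part of the log follows the same shape: run_test (traced via common/autotest_common.sh) prints the START/END banners and timing, and accel_test (accel/accel.sh@15) forwards its arguments to the accel_perf binary whose full path appears at accel.sh@12. A schematic of that wrapping is sketched below, with deliberately simplified bodies because only the call sites are visible in the trace.

# Simplified stand-ins; the real helpers (run_test in autotest_common.sh,
# accel_test in accel/accel.sh) do more bookkeeping than this.
accel_test() {
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf "$@"
}
run_test() {
  local name=$1
  shift
  echo "START TEST $name"
  time "$@"                     # the source of the real/user/sys lines above
  echo "END TEST $name"
}
run_test accel_compare accel_test -t 1 -w compare -y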
00:09:28.727 [2024-05-14 23:24:51.978517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49884 ] 00:09:28.985 [2024-05-14 23:24:52.144330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.243 [2024-05-14 23:24:52.391242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:29.501 23:24:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:31.401 23:24:54 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:31.401 ************************************ 00:09:31.401 END TEST accel_compare 00:09:31.401 ************************************ 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:09:31.401 23:24:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:31.401 00:09:31.401 real 0m2.588s 00:09:31.401 user 0m2.274s 00:09:31.401 sys 0m0.177s 00:09:31.401 23:24:54 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:31.401 23:24:54 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:09:31.401 23:24:54 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:31.401 23:24:54 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:31.401 23:24:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:31.401 23:24:54 accel -- common/autotest_common.sh@10 -- # set +x 00:09:31.401 ************************************ 00:09:31.401 START TEST accel_xor 00:09:31.401 ************************************ 00:09:31.401 23:24:54 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:31.401 23:24:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:31.402 23:24:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:31.402 23:24:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:31.402 [2024-05-14 23:24:54.600038] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
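For this first xor pass the read-back above shows val=xor followed by val=2 and a 4096-byte buffer size. A matching standalone invocation, under the same path and flag assumptions as the earlier sketches, would be:

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/build/examples/accel_perf" -t 1 -w xor -y
# No source count is passed here; the '2' read back above is consistent with
# the example defaulting to two xor source buffers.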
00:09:31.402 [2024-05-14 23:24:54.600270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49940 ] 00:09:31.660 [2024-05-14 23:24:54.749534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.921 [2024-05-14 23:24:54.962463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:31.921 23:24:55 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:31.921 23:24:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:33.825 23:24:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:33.825 00:09:33.825 real 0m2.529s 00:09:33.825 user 0m2.220s 00:09:33.825 sys 0m0.172s 00:09:33.825 23:24:56 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.825 23:24:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:33.825 ************************************ 00:09:33.825 END TEST accel_xor 00:09:33.825 ************************************ 00:09:33.825 23:24:57 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:33.825 23:24:57 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:33.825 23:24:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.825 23:24:57 accel -- common/autotest_common.sh@10 -- # set +x 00:09:33.825 ************************************ 00:09:33.825 START TEST accel_xor 00:09:33.825 ************************************ 00:09:33.825 23:24:57 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:33.825 23:24:57 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:34.083 [2024-05-14 23:24:57.182914] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
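The second xor block differs from the first only in the extra -x 3 argument, and the read-back correspondingly switches from 2 to 3, so -x evidently selects the number of xor source buffers. Under the same assumptions as the earlier sketches:

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 3
# -x 3: three 4096-byte sources XORed into one destination, matching the
#       '3' and '4096 bytes' values read back above.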
00:09:34.083 [2024-05-14 23:24:57.183130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49998 ] 00:09:34.083 [2024-05-14 23:24:57.350267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.341 [2024-05-14 23:24:57.595428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:34.599 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:34.600 23:24:57 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:34.600 23:24:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:36.499 23:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:36.499 23:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:36.499 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:36.499 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:36.499 23:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:36.499 23:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:36.499 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:36.499 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:36.499 23:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:36.500 ************************************ 00:09:36.500 END TEST accel_xor 00:09:36.500 ************************************ 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:36.500 23:24:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:36.500 00:09:36.500 real 0m2.587s 00:09:36.500 user 0m2.273s 00:09:36.500 sys 0m0.177s 00:09:36.500 23:24:59 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.500 23:24:59 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:36.500 23:24:59 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:36.500 23:24:59 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:36.500 23:24:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.500 23:24:59 accel -- common/autotest_common.sh@10 -- # set +x 00:09:36.500 ************************************ 00:09:36.500 START TEST accel_dif_verify 00:09:36.500 ************************************ 00:09:36.500 23:24:59 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:36.500 23:24:59 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:09:36.758 [2024-05-14 23:24:59.812703] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
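The dif_verify read-back looks different from the copy-style workloads: besides the two 4096-byte buffers it reports '512 bytes' and '8 bytes', which reads like 512-byte blocks carrying 8 bytes of DIF metadata, and the final flag is No rather than Yes; the exact field mapping is not spelled out in the trace, so treat that interpretation as a guess. A matching standalone run, under the same assumptions as above, would be:

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/build/examples/accel_perf" -t 1 -w dif_verify
# dif_verify generates buffers with a Data Integrity Field and checks them;
# the 512-byte block / 8-byte metadata sizes above appear to come from the
# tool's defaults, since nothing on this command line sets them.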
00:09:36.758 [2024-05-14 23:24:59.812875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50051 ] 00:09:36.758 [2024-05-14 23:24:59.974741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.016 [2024-05-14 23:25:00.189276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:37.275 23:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:39.174 23:25:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:39.174 ************************************ 00:09:39.174 END TEST accel_dif_verify 00:09:39.174 ************************************ 00:09:39.175 23:25:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:39.175 23:25:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:09:39.175 23:25:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:39.175 00:09:39.175 real 0m2.552s 00:09:39.175 user 0m2.233s 00:09:39.175 sys 0m0.177s 00:09:39.175 23:25:02 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:39.175 23:25:02 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:09:39.175 23:25:02 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:39.175 23:25:02 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:39.175 23:25:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:39.175 23:25:02 accel -- common/autotest_common.sh@10 -- # set +x 00:09:39.175 ************************************ 00:09:39.175 START TEST accel_dif_generate 00:09:39.175 ************************************ 00:09:39.175 23:25:02 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:09:39.175 23:25:02 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:09:39.175 [2024-05-14 23:25:02.414296] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:39.175 [2024-05-14 23:25:02.414490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50114 ] 00:09:39.433 [2024-05-14 23:25:02.578975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.691 [2024-05-14 23:25:02.792424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.949 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 
23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.950 23:25:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.849 ************************************ 00:09:41.849 END TEST accel_dif_generate 00:09:41.849 ************************************ 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:09:41.849 23:25:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:41.849 00:09:41.849 real 0m2.558s 00:09:41.849 user 0m2.224s 
00:09:41.849 sys 0m0.183s 00:09:41.849 23:25:04 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:41.849 23:25:04 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:09:41.849 23:25:04 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:09:41.849 23:25:04 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:41.849 23:25:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:41.849 23:25:04 accel -- common/autotest_common.sh@10 -- # set +x 00:09:41.849 ************************************ 00:09:41.849 START TEST accel_dif_generate_copy 00:09:41.849 ************************************ 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:41.849 23:25:04 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:09:41.849 [2024-05-14 23:25:05.013181] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
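Each DIF workload in this block is registered the same way (accel.sh@111-113): run_test wraps accel_test, which in turn drives accel_perf with the workload name. Shown schematically below as an illustration of the pattern visible in the trace; run_test and accel_test are the harness helpers that appear in the log, while the loop itself is only a sketch.

    # Pattern behind the three dif_* cases in this block; purely illustrative.
    for wl in dif_verify dif_generate dif_generate_copy; do
        run_test "accel_$wl" accel_test -t 1 -w "$wl"
    done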
00:09:41.849 [2024-05-14 23:25:05.013371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50165 ] 00:09:42.106 [2024-05-14 23:25:05.168813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.106 [2024-05-14 23:25:05.391114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:42.362 23:25:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.263 ************************************ 00:09:44.263 END TEST accel_dif_generate_copy 00:09:44.263 ************************************ 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:44.263 00:09:44.263 real 0m2.559s 00:09:44.263 user 0m2.228s 00:09:44.263 sys 0m0.191s 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:44.263 23:25:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:09:44.263 23:25:07 accel -- accel/accel.sh@115 -- # [[ n == y ]] 00:09:44.263 23:25:07 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:44.263 23:25:07 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:44.263 23:25:07 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:44.263 23:25:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:44.263 23:25:07 accel -- common/autotest_common.sh@10 -- # set +x 00:09:44.263 23:25:07 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:44.263 23:25:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:44.263 23:25:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:44.263 23:25:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.263 23:25:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.263 23:25:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:44.263 23:25:07 accel -- accel/accel.sh@40 
-- # local IFS=, 00:09:44.263 23:25:07 accel -- accel/accel.sh@41 -- # jq -r . 00:09:44.263 ************************************ 00:09:44.263 START TEST accel_dif_functional_tests 00:09:44.263 ************************************ 00:09:44.263 23:25:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:44.522 [2024-05-14 23:25:07.614330] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:44.522 [2024-05-14 23:25:07.614493] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50223 ] 00:09:44.522 [2024-05-14 23:25:07.765926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.780 [2024-05-14 23:25:07.983470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.780 [2024-05-14 23:25:07.983611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.780 [2024-05-14 23:25:07.983615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.347 00:09:45.347 00:09:45.347 CUnit - A unit testing framework for C - Version 2.1-3 00:09:45.347 http://cunit.sourceforge.net/ 00:09:45.347 00:09:45.347 00:09:45.347 Suite: accel_dif 00:09:45.347 Test: verify: DIF generated, GUARD check ...passed 00:09:45.347 Test: verify: DIF generated, APPTAG check ...passed 00:09:45.347 Test: verify: DIF generated, REFTAG check ...passed 00:09:45.347 Test: verify: DIF not generated, GUARD check ...passed 00:09:45.347 Test: verify: DIF not generated, APPTAG check ...passed 00:09:45.347 Test: verify: DIF not generated, REFTAG check ...passed 00:09:45.347 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:45.347 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:09:45.347 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:45.347 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-05-14 23:25:08.351095] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:45.347 [2024-05-14 23:25:08.351366] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:45.347 [2024-05-14 23:25:08.351476] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:45.347 [2024-05-14 23:25:08.351575] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:45.347 [2024-05-14 23:25:08.351656] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:45.347 [2024-05-14 23:25:08.351777] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:45.347 [2024-05-14 23:25:08.351960] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:45.347 passed 00:09:45.347 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:45.347 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-14 23:25:08.352901] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:45.347 passed 00:09:45.347 Test: generate copy: DIF generated, GUARD check ...passed 00:09:45.347 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:45.347 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:45.347 Test: 
generate copy: DIF generated, no GUARD check flag set ...passed 00:09:45.347 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:45.347 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:45.347 Test: generate copy: iovecs-len validate ...[2024-05-14 23:25:08.355228] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:09:45.347 passed 00:09:45.347 Test: generate copy: buffer alignment validate ...passed 00:09:45.347 00:09:45.347 Run Summary: Type Total Ran Passed Failed Inactive 00:09:45.347 suites 1 1 n/a 0 0 00:09:45.347 tests 20 20 20 0 0 00:09:45.347 asserts 204 204 204 0 n/a 00:09:45.347 00:09:45.347 Elapsed time = 0.020 seconds 00:09:46.721 ************************************ 00:09:46.721 END TEST accel_dif_functional_tests 00:09:46.721 ************************************ 00:09:46.721 00:09:46.721 real 0m2.114s 00:09:46.721 user 0m4.204s 00:09:46.721 sys 0m0.235s 00:09:46.722 23:25:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:46.722 23:25:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:46.722 00:09:46.722 real 0m43.393s 00:09:46.722 user 0m39.784s 00:09:46.722 sys 0m3.920s 00:09:46.722 ************************************ 00:09:46.722 END TEST accel 00:09:46.722 ************************************ 00:09:46.722 23:25:09 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:46.722 23:25:09 accel -- common/autotest_common.sh@10 -- # set +x 00:09:46.722 23:25:09 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:46.722 23:25:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:46.722 23:25:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:46.722 23:25:09 -- common/autotest_common.sh@10 -- # set +x 00:09:46.722 ************************************ 00:09:46.722 START TEST accel_rpc 00:09:46.722 ************************************ 00:09:46.722 23:25:09 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:46.722 * Looking for test storage... 00:09:46.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:46.722 23:25:09 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:46.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.722 23:25:09 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=50328 00:09:46.722 23:25:09 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 50328 00:09:46.722 23:25:09 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:46.722 23:25:09 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 50328 ']' 00:09:46.722 23:25:09 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.722 23:25:09 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:46.722 23:25:09 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.722 23:25:09 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:46.722 23:25:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.722 [2024-05-14 23:25:09.866420] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
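The *ERROR* lines inside the accel_dif suite above are produced deliberately by the negative verify cases (mismatched Guard/App/Ref tags); the CUnit summary still reports 20/20 tests and 204/204 asserts passed in 0.020 seconds. The binary the harness drove is invoked exactly as shown at accel.sh@137; fd 62 there carries the JSON accel config built by build_accel_config, so running the same line outside the harness would need a real config path in its place.

    # Invocation used by the harness for the functional DIF tests (verbatim
    # from the trace); /dev/fd/62 is the generated accel JSON config.
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62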
00:09:46.722 [2024-05-14 23:25:09.866612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50328 ] 00:09:46.979 [2024-05-14 23:25:10.032624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.979 [2024-05-14 23:25:10.240257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.544 23:25:10 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:47.544 23:25:10 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:47.544 23:25:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:47.544 23:25:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:47.544 23:25:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:47.544 23:25:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:47.544 23:25:10 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:47.544 23:25:10 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:47.544 23:25:10 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:47.544 23:25:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.544 ************************************ 00:09:47.544 START TEST accel_assign_opcode 00:09:47.544 ************************************ 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:47.544 [2024-05-14 23:25:10.621056] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:47.544 [2024-05-14 23:25:10.637044] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.544 23:25:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:48.479 23:25:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.479 23:25:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:48.479 23:25:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:48.479 23:25:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:48.479 23:25:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 
-- # xtrace_disable 00:09:48.479 23:25:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:48.479 23:25:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.479 software 00:09:48.479 ************************************ 00:09:48.479 END TEST accel_assign_opcode 00:09:48.479 ************************************ 00:09:48.479 00:09:48.479 real 0m0.861s 00:09:48.479 user 0m0.050s 00:09:48.479 sys 0m0.007s 00:09:48.479 23:25:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:48.479 23:25:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:48.479 23:25:11 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 50328 00:09:48.479 23:25:11 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 50328 ']' 00:09:48.479 23:25:11 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 50328 00:09:48.479 23:25:11 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:09:48.479 23:25:11 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:48.479 23:25:11 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 50328 00:09:48.479 23:25:11 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:48.479 23:25:11 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:48.479 killing process with pid 50328 00:09:48.479 23:25:11 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50328' 00:09:48.479 23:25:11 accel_rpc -- common/autotest_common.sh@965 -- # kill 50328 00:09:48.479 23:25:11 accel_rpc -- common/autotest_common.sh@970 -- # wait 50328 00:09:51.056 ************************************ 00:09:51.056 END TEST accel_rpc 00:09:51.056 ************************************ 00:09:51.056 00:09:51.056 real 0m4.081s 00:09:51.056 user 0m3.905s 00:09:51.056 sys 0m0.473s 00:09:51.056 23:25:13 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:51.056 23:25:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.056 23:25:13 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:51.056 23:25:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:51.056 23:25:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:51.056 23:25:13 -- common/autotest_common.sh@10 -- # set +x 00:09:51.056 ************************************ 00:09:51.056 START TEST app_cmdline 00:09:51.056 ************************************ 00:09:51.056 23:25:13 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:51.056 * Looking for test storage... 00:09:51.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:51.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
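The assign-opcode sequence traced above maps onto three RPCs. Issued with the standalone client (scripts/rpc.py, the same script the cmdline test below calls) against a target started with --wait-for-rpc, a minimal replay would look like this sketch; the expected output noted in the comment reflects the "grep software" check in the trace.

    # RPC-level replay of the accel_assign_opcode test above (sketch only;
    # assumes spdk_tgt is already up with --wait-for-rpc).
    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software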
00:09:51.056 23:25:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:51.056 23:25:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=50472 00:09:51.056 23:25:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 50472 00:09:51.056 23:25:13 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 50472 ']' 00:09:51.056 23:25:13 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.056 23:25:13 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:51.056 23:25:13 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.056 23:25:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:51.056 23:25:13 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:51.056 23:25:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:51.056 [2024-05-14 23:25:13.985228] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:09:51.056 [2024-05-14 23:25:13.985409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50472 ] 00:09:51.056 [2024-05-14 23:25:14.159683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.314 [2024-05-14 23:25:14.398497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.249 23:25:15 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:52.249 23:25:15 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:09:52.249 23:25:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:52.249 { 00:09:52.249 "version": "SPDK v24.05-pre git sha1 e8841656d", 00:09:52.249 "fields": { 00:09:52.249 "major": 24, 00:09:52.249 "minor": 5, 00:09:52.249 "patch": 0, 00:09:52.249 "suffix": "-pre", 00:09:52.249 "commit": "e8841656d" 00:09:52.249 } 00:09:52.249 } 00:09:52.249 23:25:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:52.249 23:25:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:52.249 23:25:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:52.249 23:25:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:52.249 23:25:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:52.249 23:25:15 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.249 23:25:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:52.249 23:25:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:52.249 23:25:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:52.249 23:25:15 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.506 23:25:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:52.506 23:25:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:52.506 23:25:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:52.506 23:25:15 
app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:52.506 request: 00:09:52.506 { 00:09:52.506 "method": "env_dpdk_get_mem_stats", 00:09:52.506 "req_id": 1 00:09:52.506 } 00:09:52.506 Got JSON-RPC error response 00:09:52.506 response: 00:09:52.506 { 00:09:52.506 "code": -32601, 00:09:52.506 "message": "Method not found" 00:09:52.506 } 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:52.506 23:25:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 50472 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 50472 ']' 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 50472 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:52.506 23:25:15 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 50472 00:09:52.763 killing process with pid 50472 00:09:52.763 23:25:15 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:52.763 23:25:15 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:52.763 23:25:15 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50472' 00:09:52.763 23:25:15 app_cmdline -- common/autotest_common.sh@965 -- # kill 50472 00:09:52.763 23:25:15 app_cmdline -- common/autotest_common.sh@970 -- # wait 50472 00:09:55.295 ************************************ 00:09:55.295 END TEST app_cmdline 00:09:55.295 ************************************ 00:09:55.295 00:09:55.295 real 0m4.206s 00:09:55.295 user 0m4.433s 00:09:55.295 sys 0m0.504s 00:09:55.295 23:25:17 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:55.295 23:25:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:55.295 23:25:18 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:55.295 23:25:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:55.295 23:25:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:55.295 23:25:18 -- 
common/autotest_common.sh@10 -- # set +x 00:09:55.295 ************************************ 00:09:55.295 START TEST version 00:09:55.295 ************************************ 00:09:55.295 23:25:18 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:55.295 * Looking for test storage... 00:09:55.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:55.295 23:25:18 version -- app/version.sh@17 -- # get_header_version major 00:09:55.295 23:25:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:55.295 23:25:18 version -- app/version.sh@14 -- # tr -d '"' 00:09:55.295 23:25:18 version -- app/version.sh@14 -- # cut -f2 00:09:55.295 23:25:18 version -- app/version.sh@17 -- # major=24 00:09:55.295 23:25:18 version -- app/version.sh@18 -- # get_header_version minor 00:09:55.295 23:25:18 version -- app/version.sh@14 -- # cut -f2 00:09:55.295 23:25:18 version -- app/version.sh@14 -- # tr -d '"' 00:09:55.295 23:25:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:55.295 23:25:18 version -- app/version.sh@18 -- # minor=5 00:09:55.295 23:25:18 version -- app/version.sh@19 -- # get_header_version patch 00:09:55.295 23:25:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:55.295 23:25:18 version -- app/version.sh@14 -- # cut -f2 00:09:55.295 23:25:18 version -- app/version.sh@14 -- # tr -d '"' 00:09:55.295 23:25:18 version -- app/version.sh@19 -- # patch=0 00:09:55.295 23:25:18 version -- app/version.sh@20 -- # get_header_version suffix 00:09:55.295 23:25:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:55.295 23:25:18 version -- app/version.sh@14 -- # cut -f2 00:09:55.295 23:25:18 version -- app/version.sh@14 -- # tr -d '"' 00:09:55.295 23:25:18 version -- app/version.sh@20 -- # suffix=-pre 00:09:55.295 23:25:18 version -- app/version.sh@22 -- # version=24.5 00:09:55.295 23:25:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:55.295 23:25:18 version -- app/version.sh@28 -- # version=24.5rc0 00:09:55.295 23:25:18 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:55.295 23:25:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:55.295 23:25:18 version -- app/version.sh@30 -- # py_version=24.5rc0 00:09:55.295 23:25:18 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:09:55.295 00:09:55.295 real 0m0.124s 00:09:55.295 user 0m0.073s 00:09:55.295 sys 0m0.079s 00:09:55.295 23:25:18 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:55.295 ************************************ 00:09:55.295 END TEST version 00:09:55.295 ************************************ 00:09:55.295 23:25:18 version -- common/autotest_common.sh@10 -- # set +x 00:09:55.295 23:25:18 -- spdk/autotest.sh@184 -- # '[' 1 -eq 1 ']' 00:09:55.295 23:25:18 -- spdk/autotest.sh@185 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:09:55.295 23:25:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
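The version test reduces to the grep/cut/tr pipeline traced above, applied to include/spdk/version.h, followed by a comparison against Python's spdk.__version__ (24.5rc0 on this build). Condensed into one place, with the header path taken from the log:

    # Condensed form of get_header_version as traced above.
    v=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$v" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$v" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$v" | cut -f2 | tr -d '"')
    echo "${major}.${minor}${suffix}"   # 24.5-pre for this build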
00:09:55.295 23:25:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:55.295 23:25:18 -- common/autotest_common.sh@10 -- # set +x 00:09:55.295 ************************************ 00:09:55.295 START TEST blockdev_general 00:09:55.295 ************************************ 00:09:55.295 23:25:18 blockdev_general -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:09:55.295 * Looking for test storage... 00:09:55.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:55.295 23:25:18 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:09:55.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
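The "Waiting for process to start up and listen on UNIX domain socket" message comes from waitforlisten: blockdev.sh launches spdk_tgt with --wait-for-rpc and blocks until the RPC socket answers. A simplified sketch of that launch-and-wait pattern, using the paths shown in this log (the real helper in autotest_common.sh does more bookkeeping, and the exact RPCs the test issues follow in the trace):

#!/usr/bin/env bash
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock

"$SPDK_BIN" --wait-for-rpc &
tgt_pid=$!

# Poll until the target is listening on the UNIX socket and answering RPCs.
for _ in $(seq 1 100); do
    "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
done

# With --wait-for-rpc the app pauses before subsystem init until told to proceed.
"$RPC" -s "$SOCK" framework_start_init

# ... issue test RPCs here, then stop the target ...
kill "$tgt_pid"
wait "$tgt_pid"
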
00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=50682 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 50682 00:09:55.295 23:25:18 blockdev_general -- common/autotest_common.sh@827 -- # '[' -z 50682 ']' 00:09:55.295 23:25:18 blockdev_general -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.295 23:25:18 blockdev_general -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:55.295 23:25:18 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:09:55.295 23:25:18 blockdev_general -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.295 23:25:18 blockdev_general -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:55.295 23:25:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:55.295 [2024-05-14 23:25:18.412564] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
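The initialization that follows (setup_bdev_conf in blockdev.sh) builds the bdev stack this run exercises: Malloc0 through Malloc9, split chains on Malloc1 and Malloc2, a TestPT passthru over Malloc3, raid0/concat0/raid1 volumes over the remaining malloc pairs, and an AIO bdev over a 10 MB file. A rough rpc.py equivalent of that topology, with names and sizes taken from the bdev_get_bdevs dump further down; flag spellings follow common rpc.py usage and may vary slightly between SPDK versions:

#!/usr/bin/env bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Ten 32 MiB malloc bdevs with 512-byte blocks (65536 blocks each in the dump).
for i in $(seq 0 9); do
    "$RPC" bdev_malloc_create -b "Malloc$i" 32 512
done

# Split chains: Malloc1 into two halves, Malloc2 into eight pieces.
"$RPC" bdev_split_create Malloc1 2
"$RPC" bdev_split_create Malloc2 8

# Passthru bdev TestPT layered on Malloc3.
"$RPC" bdev_passthru_create -b Malloc3 -p TestPT

# RAID volumes over pairs of malloc bdevs (raid1 takes no strip size).
"$RPC" bdev_raid_create -n raid0   -z 64 -r raid0  -b "Malloc4 Malloc5"
"$RPC" bdev_raid_create -n concat0 -z 64 -r concat -b "Malloc6 Malloc7"
"$RPC" bdev_raid_create -n raid1         -r raid1  -b "Malloc8 Malloc9"

# AIO bdev over a 10 MB file with a 2048-byte block size (path here is arbitrary).
dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000
"$RPC" bdev_aio_create /tmp/aiofile AIO0 2048
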
00:09:55.295 [2024-05-14 23:25:18.412783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50682 ] 00:09:55.295 [2024-05-14 23:25:18.572537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.554 [2024-05-14 23:25:18.817535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.121 23:25:19 blockdev_general -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:56.121 23:25:19 blockdev_general -- common/autotest_common.sh@860 -- # return 0 00:09:56.121 23:25:19 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:56.121 23:25:19 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:09:56.121 23:25:19 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:09:56.121 23:25:19 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.121 23:25:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:57.068 [2024-05-14 23:25:20.033568] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:57.068 [2024-05-14 23:25:20.033669] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:57.068 00:09:57.068 [2024-05-14 23:25:20.041512] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:57.068 [2024-05-14 23:25:20.041581] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:57.068 00:09:57.068 Malloc0 00:09:57.068 Malloc1 00:09:57.068 Malloc2 00:09:57.068 Malloc3 00:09:57.068 Malloc4 00:09:57.068 Malloc5 00:09:57.068 Malloc6 00:09:57.326 Malloc7 00:09:57.326 Malloc8 00:09:57.326 Malloc9 00:09:57.326 [2024-05-14 23:25:20.445099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:57.326 [2024-05-14 23:25:20.445337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.326 [2024-05-14 23:25:20.445405] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d980 00:09:57.326 [2024-05-14 23:25:20.445435] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.326 [2024-05-14 23:25:20.447106] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.326 [2024-05-14 23:25:20.447177] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:57.326 TestPT 00:09:57.326 23:25:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.326 23:25:20 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:09:57.326 5000+0 records in 00:09:57.326 5000+0 records out 00:09:57.326 10240000 bytes (10 MB) copied, 0.0202094 s, 507 MB/s 00:09:57.326 23:25:20 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:09:57.326 23:25:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.326 23:25:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:57.326 AIO0 00:09:57.326 23:25:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.326 23:25:20 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:57.326 23:25:20 blockdev_general -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.326 23:25:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:57.326 23:25:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.326 23:25:20 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:09:57.326 23:25:20 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:57.326 23:25:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.327 23:25:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:57.327 23:25:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.327 23:25:20 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:57.327 23:25:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.327 23:25:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:57.327 23:25:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.327 23:25:20 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:57.327 23:25:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.327 23:25:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:57.327 23:25:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.327 23:25:20 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:57.327 23:25:20 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:57.327 23:25:20 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:57.327 23:25:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.327 23:25:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:57.586 23:25:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.586 23:25:20 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:57.587 23:25:20 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:57.588 23:25:20 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "f442ba9f-ccc2-4d92-8fa0-ac184693b38b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f442ba9f-ccc2-4d92-8fa0-ac184693b38b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d2fb27c9-99bf-5d6c-9960-94035c73ed9c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d2fb27c9-99bf-5d6c-9960-94035c73ed9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' 
' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "1cb5255c-b42b-5bef-a564-33828aece1a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1cb5255c-b42b-5bef-a564-33828aece1a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "a12d902d-84e0-5865-9b16-3b269e19343d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a12d902d-84e0-5865-9b16-3b269e19343d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "6f6bc59b-8c96-54ba-90fb-c9e26db5d6e5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6f6bc59b-8c96-54ba-90fb-c9e26db5d6e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "8f6f331e-9e3c-5729-9e83-986d2bf9dcec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8f6f331e-9e3c-5729-9e83-986d2bf9dcec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "5f656d60-afd8-5fc4-8f8c-353caaeb5b2d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5f656d60-afd8-5fc4-8f8c-353caaeb5b2d",' 
' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "8c779e5d-f475-5f72-a3bc-abc50a3f4996"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8c779e5d-f475-5f72-a3bc-abc50a3f4996",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "43a3a3e6-0868-56c7-83e0-e582e4ed2f66"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "43a3a3e6-0868-56c7-83e0-e582e4ed2f66",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "7ce5efe0-ff79-5a39-bf01-53ee4d40a7e7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7ce5efe0-ff79-5a39-bf01-53ee4d40a7e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "4443b631-c2a0-5f5d-83c3-3735a82f6cd0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4443b631-c2a0-5f5d-83c3-3735a82f6cd0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": 
"Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "f2ef3380-d530-596c-ad06-96e22edc04da"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f2ef3380-d530-596c-ad06-96e22edc04da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5f27ad5a-8e1c-4a19-995f-56503e585de6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5f27ad5a-8e1c-4a19-995f-56503e585de6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5f27ad5a-8e1c-4a19-995f-56503e585de6",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c212db74-15ac-45a5-bbde-6cc22ee5aaa3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f0ca39b8-4630-4c2d-9bc8-11992242e63f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "dca1f9e6-ca25-4d9e-98de-30a53b88c0be"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dca1f9e6-ca25-4d9e-98de-30a53b88c0be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "dca1f9e6-ca25-4d9e-98de-30a53b88c0be",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7bf915e1-5c5e-491e-9394-6b365d084a35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a1304eaa-01e2-4a97-8aa6-f5da0f7f6f91",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "52fb92bc-78fd-48e9-b0e7-caeb077c8ebf"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "52fb92bc-78fd-48e9-b0e7-caeb077c8ebf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "52fb92bc-78fd-48e9-b0e7-caeb077c8ebf",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "d4682cb6-c0af-4980-8b8a-19fc36e7caec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "986b1f38-fd6c-4dc6-b215-f9adabb003d7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "9f49edbd-9403-4e1b-8035-04a27cfa29f7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "9f49edbd-9403-4e1b-8035-04a27cfa29f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:09:57.588 23:25:20 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:57.588 23:25:20 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:09:57.588 23:25:20 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:57.588 23:25:20 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 50682 00:09:57.588 23:25:20 blockdev_general -- 
common/autotest_common.sh@946 -- # '[' -z 50682 ']' 00:09:57.588 23:25:20 blockdev_general -- common/autotest_common.sh@950 -- # kill -0 50682 00:09:57.588 23:25:20 blockdev_general -- common/autotest_common.sh@951 -- # uname 00:09:57.588 23:25:20 blockdev_general -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:57.588 23:25:20 blockdev_general -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 50682 00:09:57.588 killing process with pid 50682 00:09:57.588 23:25:20 blockdev_general -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:57.588 23:25:20 blockdev_general -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:57.588 23:25:20 blockdev_general -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50682' 00:09:57.588 23:25:20 blockdev_general -- common/autotest_common.sh@965 -- # kill 50682 00:09:57.588 23:25:20 blockdev_general -- common/autotest_common.sh@970 -- # wait 50682 00:10:00.871 23:25:23 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:00.871 23:25:23 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:10:00.871 23:25:23 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:00.871 23:25:23 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:00.871 23:25:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:00.871 ************************************ 00:10:00.871 START TEST bdev_hello_world 00:10:00.871 ************************************ 00:10:00.871 23:25:23 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:10:00.871 [2024-05-14 23:25:23.961787] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
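Every stage in this log (app_cmdline, version, bdev_hello_world, and so on) is driven through run_test from autotest_common.sh, which prints the START TEST / END TEST banners and the real/user/sys timings interleaved above. A rough reconstruction of that wrapper, simplified for illustration (the real helper also toggles xtrace and tracks exit codes more carefully):

# Simplified sketch of the run_test wrapper; not the exact autotest_common.sh code.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# e.g. the stage that starts above (variables here are placeholders):
# run_test bdev_hello_world "$SPDK_DIR/build/examples/hello_bdev" --json "$CONF" -b Malloc0 ''
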
00:10:00.871 [2024-05-14 23:25:23.961976] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50776 ] 00:10:00.871 [2024-05-14 23:25:24.112339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.129 [2024-05-14 23:25:24.321876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.694 [2024-05-14 23:25:24.752380] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:01.695 [2024-05-14 23:25:24.752526] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:01.695 [2024-05-14 23:25:24.760315] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:01.695 [2024-05-14 23:25:24.760366] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:01.695 [2024-05-14 23:25:24.768355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:01.695 [2024-05-14 23:25:24.768406] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:01.695 [2024-05-14 23:25:24.768452] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:01.695 [2024-05-14 23:25:24.941242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:01.695 [2024-05-14 23:25:24.941340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.695 [2024-05-14 23:25:24.941381] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002bb80 00:10:01.695 [2024-05-14 23:25:24.941410] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.695 [2024-05-14 23:25:24.943306] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.695 [2024-05-14 23:25:24.943354] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:01.952 [2024-05-14 23:25:25.220705] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:01.952 [2024-05-14 23:25:25.220773] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:10:01.952 [2024-05-14 23:25:25.220837] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:01.952 [2024-05-14 23:25:25.220888] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:01.952 [2024-05-14 23:25:25.220946] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:01.952 [2024-05-14 23:25:25.220972] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:01.952 [2024-05-14 23:25:25.221032] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
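The hello_bdev example above only needs a JSON bdev config and a bdev name; this run points it at test/bdev/bdev.json and Malloc0 and finishes with "Read string from bdev : Hello World!". A hypothetical minimal config that would satisfy the same invocation (config shape per SPDK's JSON config format; the file path and contents below are illustrative, not the file used in this run):

#!/usr/bin/env bash
# Illustrative minimal config; the CI run uses the larger test/bdev/bdev.json.
cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /tmp/minimal_bdev.json -b Malloc0
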
00:10:01.952 00:10:01.952 [2024-05-14 23:25:25.221068] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:04.490 ************************************ 00:10:04.490 END TEST bdev_hello_world 00:10:04.490 ************************************ 00:10:04.490 00:10:04.490 real 0m3.442s 00:10:04.490 user 0m2.861s 00:10:04.490 sys 0m0.369s 00:10:04.490 23:25:27 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:04.490 23:25:27 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:04.490 23:25:27 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:10:04.490 23:25:27 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:04.490 23:25:27 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:04.490 23:25:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:04.490 ************************************ 00:10:04.490 START TEST bdev_bounds 00:10:04.490 ************************************ 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:10:04.490 Process bdevio pid: 50845 00:10:04.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=50845 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 50845' 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 50845 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 50845 ']' 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:04.490 23:25:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:04.490 [2024-05-14 23:25:27.441668] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
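bdev_bounds drives the bdevio app started above: bdevio loads the same bdev.json, waits for an RPC trigger (-w), and tests.py perform_tests then runs the CUnit suites over every configured bdev (16 suites, 368 tests in the summary further down). A condensed sketch of that flow, using the paths from this log and a plain sleep where the real test uses waitforlisten:

#!/usr/bin/env bash
BDEVIO=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
TESTS=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py
CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

"$BDEVIO" -w -s 0 --json "$CONF" &
bdevio_pid=$!

# Crude stand-in for waitforlisten: give the app time to open /var/tmp/spdk.sock.
sleep 2

# Trigger the CUnit bdevio suites over every configured bdev, then stop the app.
"$TESTS" perform_tests
kill "$bdevio_pid"
wait "$bdevio_pid"
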
00:10:04.490 [2024-05-14 23:25:27.441829] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50845 ] 00:10:04.490 [2024-05-14 23:25:27.605141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:04.748 [2024-05-14 23:25:27.812229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.748 [2024-05-14 23:25:27.812299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.748 [2024-05-14 23:25:27.812312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.007 [2024-05-14 23:25:28.243315] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:05.007 [2024-05-14 23:25:28.243449] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:05.007 [2024-05-14 23:25:28.251284] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:05.007 [2024-05-14 23:25:28.251346] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:05.007 [2024-05-14 23:25:28.259311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:05.007 [2024-05-14 23:25:28.259376] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:05.007 [2024-05-14 23:25:28.259404] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:05.265 [2024-05-14 23:25:28.434128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:05.265 [2024-05-14 23:25:28.434280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.265 [2024-05-14 23:25:28.434375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002c180 00:10:05.265 [2024-05-14 23:25:28.434417] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.265 [2024-05-14 23:25:28.437166] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.265 [2024-05-14 23:25:28.437257] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:05.523 23:25:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:05.523 23:25:28 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:10:05.523 23:25:28 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:05.781 I/O targets: 00:10:05.781 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:10:05.781 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:10:05.781 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:10:05.781 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:10:05.781 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:10:05.781 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:10:05.781 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:10:05.781 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:10:05.781 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:10:05.781 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:10:05.781 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:10:05.781 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:10:05.781 raid0: 131072 blocks of 512 bytes (64 MiB) 00:10:05.781 concat0: 131072 blocks of 512 bytes (64 MiB) 00:10:05.781 raid1: 65536 
blocks of 512 bytes (32 MiB) 00:10:05.781 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:10:05.781 00:10:05.781 00:10:05.781 CUnit - A unit testing framework for C - Version 2.1-3 00:10:05.781 http://cunit.sourceforge.net/ 00:10:05.781 00:10:05.781 00:10:05.781 Suite: bdevio tests on: AIO0 00:10:05.781 Test: blockdev write read block ...passed 00:10:05.781 Test: blockdev write zeroes read block ...passed 00:10:05.781 Test: blockdev write zeroes read no split ...passed 00:10:05.781 Test: blockdev write zeroes read split ...passed 00:10:05.781 Test: blockdev write zeroes read split partial ...passed 00:10:05.781 Test: blockdev reset ...passed 00:10:05.781 Test: blockdev write read 8 blocks ...passed 00:10:05.781 Test: blockdev write read size > 128k ...passed 00:10:05.781 Test: blockdev write read invalid size ...passed 00:10:05.781 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:05.781 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:05.781 Test: blockdev write read max offset ...passed 00:10:05.781 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:05.781 Test: blockdev writev readv 8 blocks ...passed 00:10:05.781 Test: blockdev writev readv 30 x 1block ...passed 00:10:05.781 Test: blockdev writev readv block ...passed 00:10:05.781 Test: blockdev writev readv size > 128k ...passed 00:10:05.781 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:05.781 Test: blockdev comparev and writev ...passed 00:10:05.781 Test: blockdev nvme passthru rw ...passed 00:10:05.781 Test: blockdev nvme passthru vendor specific ...passed 00:10:05.781 Test: blockdev nvme admin passthru ...passed 00:10:05.781 Test: blockdev copy ...passed 00:10:05.781 Suite: bdevio tests on: raid1 00:10:05.781 Test: blockdev write read block ...passed 00:10:05.781 Test: blockdev write zeroes read block ...passed 00:10:05.781 Test: blockdev write zeroes read no split ...passed 00:10:05.781 Test: blockdev write zeroes read split ...passed 00:10:05.781 Test: blockdev write zeroes read split partial ...passed 00:10:05.781 Test: blockdev reset ...passed 00:10:05.781 Test: blockdev write read 8 blocks ...passed 00:10:05.781 Test: blockdev write read size > 128k ...passed 00:10:05.781 Test: blockdev write read invalid size ...passed 00:10:05.781 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:05.781 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:05.781 Test: blockdev write read max offset ...passed 00:10:05.781 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:05.781 Test: blockdev writev readv 8 blocks ...passed 00:10:05.782 Test: blockdev writev readv 30 x 1block ...passed 00:10:05.782 Test: blockdev writev readv block ...passed 00:10:05.782 Test: blockdev writev readv size > 128k ...passed 00:10:05.782 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:05.782 Test: blockdev comparev and writev ...passed 00:10:05.782 Test: blockdev nvme passthru rw ...passed 00:10:05.782 Test: blockdev nvme passthru vendor specific ...passed 00:10:05.782 Test: blockdev nvme admin passthru ...passed 00:10:05.782 Test: blockdev copy ...passed 00:10:05.782 Suite: bdevio tests on: concat0 00:10:05.782 Test: blockdev write read block ...passed 00:10:05.782 Test: blockdev write zeroes read block ...passed 00:10:05.782 Test: blockdev write zeroes read no split ...passed 00:10:06.040 Test: blockdev write zeroes read split ...passed 00:10:06.040 Test: 
blockdev write zeroes read split partial ...passed 00:10:06.040 Test: blockdev reset ...passed 00:10:06.040 Test: blockdev write read 8 blocks ...passed 00:10:06.040 Test: blockdev write read size > 128k ...passed 00:10:06.040 Test: blockdev write read invalid size ...passed 00:10:06.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.040 Test: blockdev write read max offset ...passed 00:10:06.040 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.040 Test: blockdev writev readv 8 blocks ...passed 00:10:06.040 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.040 Test: blockdev writev readv block ...passed 00:10:06.040 Test: blockdev writev readv size > 128k ...passed 00:10:06.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.040 Test: blockdev comparev and writev ...passed 00:10:06.040 Test: blockdev nvme passthru rw ...passed 00:10:06.040 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.040 Test: blockdev nvme admin passthru ...passed 00:10:06.040 Test: blockdev copy ...passed 00:10:06.040 Suite: bdevio tests on: raid0 00:10:06.040 Test: blockdev write read block ...passed 00:10:06.040 Test: blockdev write zeroes read block ...passed 00:10:06.040 Test: blockdev write zeroes read no split ...passed 00:10:06.040 Test: blockdev write zeroes read split ...passed 00:10:06.040 Test: blockdev write zeroes read split partial ...passed 00:10:06.040 Test: blockdev reset ...passed 00:10:06.040 Test: blockdev write read 8 blocks ...passed 00:10:06.040 Test: blockdev write read size > 128k ...passed 00:10:06.040 Test: blockdev write read invalid size ...passed 00:10:06.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.040 Test: blockdev write read max offset ...passed 00:10:06.040 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.040 Test: blockdev writev readv 8 blocks ...passed 00:10:06.040 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.040 Test: blockdev writev readv block ...passed 00:10:06.040 Test: blockdev writev readv size > 128k ...passed 00:10:06.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.040 Test: blockdev comparev and writev ...passed 00:10:06.040 Test: blockdev nvme passthru rw ...passed 00:10:06.040 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.040 Test: blockdev nvme admin passthru ...passed 00:10:06.040 Test: blockdev copy ...passed 00:10:06.040 Suite: bdevio tests on: TestPT 00:10:06.040 Test: blockdev write read block ...passed 00:10:06.040 Test: blockdev write zeroes read block ...passed 00:10:06.040 Test: blockdev write zeroes read no split ...passed 00:10:06.040 Test: blockdev write zeroes read split ...passed 00:10:06.040 Test: blockdev write zeroes read split partial ...passed 00:10:06.040 Test: blockdev reset ...passed 00:10:06.040 Test: blockdev write read 8 blocks ...passed 00:10:06.040 Test: blockdev write read size > 128k ...passed 00:10:06.040 Test: blockdev write read invalid size ...passed 00:10:06.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.040 Test: blockdev write read max offset ...passed 00:10:06.040 Test: blockdev write read 2 blocks on 
overlapped address offset ...passed 00:10:06.040 Test: blockdev writev readv 8 blocks ...passed 00:10:06.040 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.040 Test: blockdev writev readv block ...passed 00:10:06.040 Test: blockdev writev readv size > 128k ...passed 00:10:06.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.040 Test: blockdev comparev and writev ...passed 00:10:06.040 Test: blockdev nvme passthru rw ...passed 00:10:06.040 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.040 Test: blockdev nvme admin passthru ...passed 00:10:06.040 Test: blockdev copy ...passed 00:10:06.040 Suite: bdevio tests on: Malloc2p7 00:10:06.040 Test: blockdev write read block ...passed 00:10:06.040 Test: blockdev write zeroes read block ...passed 00:10:06.040 Test: blockdev write zeroes read no split ...passed 00:10:06.040 Test: blockdev write zeroes read split ...passed 00:10:06.298 Test: blockdev write zeroes read split partial ...passed 00:10:06.298 Test: blockdev reset ...passed 00:10:06.298 Test: blockdev write read 8 blocks ...passed 00:10:06.298 Test: blockdev write read size > 128k ...passed 00:10:06.298 Test: blockdev write read invalid size ...passed 00:10:06.298 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.298 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.298 Test: blockdev write read max offset ...passed 00:10:06.298 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.298 Test: blockdev writev readv 8 blocks ...passed 00:10:06.298 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.298 Test: blockdev writev readv block ...passed 00:10:06.298 Test: blockdev writev readv size > 128k ...passed 00:10:06.298 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.298 Test: blockdev comparev and writev ...passed 00:10:06.298 Test: blockdev nvme passthru rw ...passed 00:10:06.298 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.298 Test: blockdev nvme admin passthru ...passed 00:10:06.298 Test: blockdev copy ...passed 00:10:06.298 Suite: bdevio tests on: Malloc2p6 00:10:06.298 Test: blockdev write read block ...passed 00:10:06.298 Test: blockdev write zeroes read block ...passed 00:10:06.298 Test: blockdev write zeroes read no split ...passed 00:10:06.298 Test: blockdev write zeroes read split ...passed 00:10:06.298 Test: blockdev write zeroes read split partial ...passed 00:10:06.298 Test: blockdev reset ...passed 00:10:06.298 Test: blockdev write read 8 blocks ...passed 00:10:06.298 Test: blockdev write read size > 128k ...passed 00:10:06.298 Test: blockdev write read invalid size ...passed 00:10:06.298 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.298 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.298 Test: blockdev write read max offset ...passed 00:10:06.298 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.298 Test: blockdev writev readv 8 blocks ...passed 00:10:06.298 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.298 Test: blockdev writev readv block ...passed 00:10:06.298 Test: blockdev writev readv size > 128k ...passed 00:10:06.298 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.298 Test: blockdev comparev and writev ...passed 00:10:06.298 Test: blockdev nvme passthru rw ...passed 00:10:06.298 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.298 
Test: blockdev nvme admin passthru ...passed 00:10:06.298 Test: blockdev copy ...passed 00:10:06.298 Suite: bdevio tests on: Malloc2p5 00:10:06.298 Test: blockdev write read block ...passed 00:10:06.298 Test: blockdev write zeroes read block ...passed 00:10:06.298 Test: blockdev write zeroes read no split ...passed 00:10:06.298 Test: blockdev write zeroes read split ...passed 00:10:06.298 Test: blockdev write zeroes read split partial ...passed 00:10:06.298 Test: blockdev reset ...passed 00:10:06.298 Test: blockdev write read 8 blocks ...passed 00:10:06.298 Test: blockdev write read size > 128k ...passed 00:10:06.298 Test: blockdev write read invalid size ...passed 00:10:06.298 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.298 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.298 Test: blockdev write read max offset ...passed 00:10:06.298 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.298 Test: blockdev writev readv 8 blocks ...passed 00:10:06.299 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.299 Test: blockdev writev readv block ...passed 00:10:06.299 Test: blockdev writev readv size > 128k ...passed 00:10:06.299 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.299 Test: blockdev comparev and writev ...passed 00:10:06.299 Test: blockdev nvme passthru rw ...passed 00:10:06.299 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.299 Test: blockdev nvme admin passthru ...passed 00:10:06.299 Test: blockdev copy ...passed 00:10:06.299 Suite: bdevio tests on: Malloc2p4 00:10:06.299 Test: blockdev write read block ...passed 00:10:06.299 Test: blockdev write zeroes read block ...passed 00:10:06.299 Test: blockdev write zeroes read no split ...passed 00:10:06.299 Test: blockdev write zeroes read split ...passed 00:10:06.299 Test: blockdev write zeroes read split partial ...passed 00:10:06.299 Test: blockdev reset ...passed 00:10:06.299 Test: blockdev write read 8 blocks ...passed 00:10:06.299 Test: blockdev write read size > 128k ...passed 00:10:06.299 Test: blockdev write read invalid size ...passed 00:10:06.299 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.299 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.299 Test: blockdev write read max offset ...passed 00:10:06.299 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.299 Test: blockdev writev readv 8 blocks ...passed 00:10:06.299 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.299 Test: blockdev writev readv block ...passed 00:10:06.299 Test: blockdev writev readv size > 128k ...passed 00:10:06.299 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.299 Test: blockdev comparev and writev ...passed 00:10:06.299 Test: blockdev nvme passthru rw ...passed 00:10:06.299 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.299 Test: blockdev nvme admin passthru ...passed 00:10:06.299 Test: blockdev copy ...passed 00:10:06.299 Suite: bdevio tests on: Malloc2p3 00:10:06.299 Test: blockdev write read block ...passed 00:10:06.299 Test: blockdev write zeroes read block ...passed 00:10:06.299 Test: blockdev write zeroes read no split ...passed 00:10:06.299 Test: blockdev write zeroes read split ...passed 00:10:06.299 Test: blockdev write zeroes read split partial ...passed 00:10:06.299 Test: blockdev reset ...passed 00:10:06.299 Test: blockdev write read 8 blocks ...passed 
00:10:06.299 Test: blockdev write read size > 128k ...passed 00:10:06.299 Test: blockdev write read invalid size ...passed 00:10:06.299 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.299 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.299 Test: blockdev write read max offset ...passed 00:10:06.299 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.299 Test: blockdev writev readv 8 blocks ...passed 00:10:06.299 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.299 Test: blockdev writev readv block ...passed 00:10:06.299 Test: blockdev writev readv size > 128k ...passed 00:10:06.299 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.299 Test: blockdev comparev and writev ...passed 00:10:06.299 Test: blockdev nvme passthru rw ...passed 00:10:06.299 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.299 Test: blockdev nvme admin passthru ...passed 00:10:06.299 Test: blockdev copy ...passed 00:10:06.299 Suite: bdevio tests on: Malloc2p2 00:10:06.299 Test: blockdev write read block ...passed 00:10:06.299 Test: blockdev write zeroes read block ...passed 00:10:06.299 Test: blockdev write zeroes read no split ...passed 00:10:06.561 Test: blockdev write zeroes read split ...passed 00:10:06.561 Test: blockdev write zeroes read split partial ...passed 00:10:06.561 Test: blockdev reset ...passed 00:10:06.561 Test: blockdev write read 8 blocks ...passed 00:10:06.561 Test: blockdev write read size > 128k ...passed 00:10:06.561 Test: blockdev write read invalid size ...passed 00:10:06.561 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.561 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.561 Test: blockdev write read max offset ...passed 00:10:06.561 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.561 Test: blockdev writev readv 8 blocks ...passed 00:10:06.561 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.561 Test: blockdev writev readv block ...passed 00:10:06.561 Test: blockdev writev readv size > 128k ...passed 00:10:06.561 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.561 Test: blockdev comparev and writev ...passed 00:10:06.561 Test: blockdev nvme passthru rw ...passed 00:10:06.561 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.561 Test: blockdev nvme admin passthru ...passed 00:10:06.561 Test: blockdev copy ...passed 00:10:06.561 Suite: bdevio tests on: Malloc2p1 00:10:06.561 Test: blockdev write read block ...passed 00:10:06.561 Test: blockdev write zeroes read block ...passed 00:10:06.561 Test: blockdev write zeroes read no split ...passed 00:10:06.561 Test: blockdev write zeroes read split ...passed 00:10:06.561 Test: blockdev write zeroes read split partial ...passed 00:10:06.561 Test: blockdev reset ...passed 00:10:06.561 Test: blockdev write read 8 blocks ...passed 00:10:06.561 Test: blockdev write read size > 128k ...passed 00:10:06.561 Test: blockdev write read invalid size ...passed 00:10:06.561 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.561 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.561 Test: blockdev write read max offset ...passed 00:10:06.561 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.561 Test: blockdev writev readv 8 blocks ...passed 00:10:06.561 Test: blockdev writev readv 30 x 
1block ...passed 00:10:06.561 Test: blockdev writev readv block ...passed 00:10:06.561 Test: blockdev writev readv size > 128k ...passed 00:10:06.561 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.561 Test: blockdev comparev and writev ...passed 00:10:06.561 Test: blockdev nvme passthru rw ...passed 00:10:06.561 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.561 Test: blockdev nvme admin passthru ...passed 00:10:06.561 Test: blockdev copy ...passed 00:10:06.561 Suite: bdevio tests on: Malloc2p0 00:10:06.561 Test: blockdev write read block ...passed 00:10:06.561 Test: blockdev write zeroes read block ...passed 00:10:06.561 Test: blockdev write zeroes read no split ...passed 00:10:06.561 Test: blockdev write zeroes read split ...passed 00:10:06.561 Test: blockdev write zeroes read split partial ...passed 00:10:06.561 Test: blockdev reset ...passed 00:10:06.561 Test: blockdev write read 8 blocks ...passed 00:10:06.561 Test: blockdev write read size > 128k ...passed 00:10:06.561 Test: blockdev write read invalid size ...passed 00:10:06.561 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.561 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.561 Test: blockdev write read max offset ...passed 00:10:06.561 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.561 Test: blockdev writev readv 8 blocks ...passed 00:10:06.561 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.561 Test: blockdev writev readv block ...passed 00:10:06.561 Test: blockdev writev readv size > 128k ...passed 00:10:06.561 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.561 Test: blockdev comparev and writev ...passed 00:10:06.561 Test: blockdev nvme passthru rw ...passed 00:10:06.561 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.561 Test: blockdev nvme admin passthru ...passed 00:10:06.561 Test: blockdev copy ...passed 00:10:06.561 Suite: bdevio tests on: Malloc1p1 00:10:06.561 Test: blockdev write read block ...passed 00:10:06.561 Test: blockdev write zeroes read block ...passed 00:10:06.561 Test: blockdev write zeroes read no split ...passed 00:10:06.561 Test: blockdev write zeroes read split ...passed 00:10:06.561 Test: blockdev write zeroes read split partial ...passed 00:10:06.561 Test: blockdev reset ...passed 00:10:06.561 Test: blockdev write read 8 blocks ...passed 00:10:06.561 Test: blockdev write read size > 128k ...passed 00:10:06.561 Test: blockdev write read invalid size ...passed 00:10:06.561 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.561 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.561 Test: blockdev write read max offset ...passed 00:10:06.561 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.561 Test: blockdev writev readv 8 blocks ...passed 00:10:06.561 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.561 Test: blockdev writev readv block ...passed 00:10:06.561 Test: blockdev writev readv size > 128k ...passed 00:10:06.562 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.562 Test: blockdev comparev and writev ...passed 00:10:06.562 Test: blockdev nvme passthru rw ...passed 00:10:06.562 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.562 Test: blockdev nvme admin passthru ...passed 00:10:06.562 Test: blockdev copy ...passed 00:10:06.562 Suite: bdevio tests on: Malloc1p0 
00:10:06.562 Test: blockdev write read block ...passed 00:10:06.562 Test: blockdev write zeroes read block ...passed 00:10:06.562 Test: blockdev write zeroes read no split ...passed 00:10:06.562 Test: blockdev write zeroes read split ...passed 00:10:06.562 Test: blockdev write zeroes read split partial ...passed 00:10:06.562 Test: blockdev reset ...passed 00:10:06.562 Test: blockdev write read 8 blocks ...passed 00:10:06.562 Test: blockdev write read size > 128k ...passed 00:10:06.562 Test: blockdev write read invalid size ...passed 00:10:06.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.819 Test: blockdev write read max offset ...passed 00:10:06.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.819 Test: blockdev writev readv 8 blocks ...passed 00:10:06.819 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.819 Test: blockdev writev readv block ...passed 00:10:06.819 Test: blockdev writev readv size > 128k ...passed 00:10:06.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.820 Test: blockdev comparev and writev ...passed 00:10:06.820 Test: blockdev nvme passthru rw ...passed 00:10:06.820 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.820 Test: blockdev nvme admin passthru ...passed 00:10:06.820 Test: blockdev copy ...passed 00:10:06.820 Suite: bdevio tests on: Malloc0 00:10:06.820 Test: blockdev write read block ...passed 00:10:06.820 Test: blockdev write zeroes read block ...passed 00:10:06.820 Test: blockdev write zeroes read no split ...passed 00:10:06.820 Test: blockdev write zeroes read split ...passed 00:10:06.820 Test: blockdev write zeroes read split partial ...passed 00:10:06.820 Test: blockdev reset ...passed 00:10:06.820 Test: blockdev write read 8 blocks ...passed 00:10:06.820 Test: blockdev write read size > 128k ...passed 00:10:06.820 Test: blockdev write read invalid size ...passed 00:10:06.820 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:06.820 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:06.820 Test: blockdev write read max offset ...passed 00:10:06.820 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:06.820 Test: blockdev writev readv 8 blocks ...passed 00:10:06.820 Test: blockdev writev readv 30 x 1block ...passed 00:10:06.820 Test: blockdev writev readv block ...passed 00:10:06.820 Test: blockdev writev readv size > 128k ...passed 00:10:06.820 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:06.820 Test: blockdev comparev and writev ...passed 00:10:06.820 Test: blockdev nvme passthru rw ...passed 00:10:06.820 Test: blockdev nvme passthru vendor specific ...passed 00:10:06.820 Test: blockdev nvme admin passthru ...passed 00:10:06.820 Test: blockdev copy ...passed 00:10:06.820 00:10:06.820 Run Summary: Type Total Ran Passed Failed Inactive 00:10:06.820 suites 16 16 n/a 0 0 00:10:06.820 tests 368 368 368 0 0 00:10:06.820 asserts 2224 2224 2224 0 n/a 00:10:06.820 00:10:06.820 Elapsed time = 2.970 seconds 00:10:06.820 0 00:10:06.820 23:25:29 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 50845 00:10:06.820 23:25:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 50845 ']' 00:10:06.820 23:25:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 50845 00:10:06.820 23:25:29 
blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:10:06.820 23:25:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:06.820 23:25:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 50845 00:10:06.820 23:25:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:06.820 23:25:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:06.820 23:25:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50845' 00:10:06.820 killing process with pid 50845 00:10:06.820 23:25:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@965 -- # kill 50845 00:10:06.820 23:25:29 blockdev_general.bdev_bounds -- common/autotest_common.sh@970 -- # wait 50845 00:10:08.721 ************************************ 00:10:08.721 END TEST bdev_bounds 00:10:08.721 ************************************ 00:10:08.721 23:25:31 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:10:08.721 00:10:08.721 real 0m4.472s 00:10:08.721 user 0m11.362s 00:10:08.721 sys 0m0.491s 00:10:08.721 23:25:31 blockdev_general.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:08.721 23:25:31 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:08.721 23:25:31 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:10:08.721 23:25:31 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:08.721 23:25:31 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:08.721 23:25:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:08.721 ************************************ 00:10:08.721 START TEST bdev_nbd 00:10:08.721 ************************************ 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # modprobe -q nbd nbds_max=16 00:10:08.721 ************************************ 00:10:08.721 END TEST bdev_nbd 00:10:08.721 ************************************ 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- 
bdev/blockdev.sh@309 -- # return 0 00:10:08.721 00:10:08.721 real 0m0.008s 00:10:08.721 user 0m0.001s 00:10:08.721 sys 0m0.007s 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:08.721 23:25:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:08.721 23:25:31 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:10:08.721 23:25:31 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:10:08.721 23:25:31 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:10:08.721 23:25:31 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:10:08.721 23:25:31 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:08.721 23:25:31 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:08.721 23:25:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:08.721 ************************************ 00:10:08.721 START TEST bdev_fio 00:10:08.721 ************************************ 00:10:08.721 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:10:08.721 23:25:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:10:08.721 23:25:31 
blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 
-- # echo '[job_Malloc2p7]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:08.980 23:25:32 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:10:08.980 ************************************ 00:10:08.980 START TEST bdev_fio_rw_verify 00:10:08.980 ************************************ 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:08.980 23:25:32 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=(libasan libclang_rt.asan) 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/lib64/libasan.so.6 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /lib64/libasan.so.6 ]] 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:10:08.980 23:25:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:09.240 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:09.240 fio-3.35 00:10:09.240 Starting 16 threads 00:10:21.440 00:10:21.440 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=51011: Tue May 14 23:25:44 2024 00:10:21.440 read: IOPS=107k, BW=417MiB/s (437MB/s)(4176MiB/10011msec) 00:10:21.440 slat (nsec): min=838, max=56024k, avg=10419.81, stdev=173086.84 00:10:21.440 clat (usec): min=4, max=62603, avg=116.78, stdev=656.76 00:10:21.440 lat (usec): min=9, max=62605, avg=127.20, stdev=678.95 00:10:21.440 clat percentiles (usec): 00:10:21.440 | 50.000th=[ 68], 99.000th=[ 725], 99.900th=[11600], 99.990th=[23200], 00:10:21.440 | 99.999th=[55837] 00:10:21.440 write: IOPS=171k, BW=667MiB/s (700MB/s)(6669MiB/9992msec); 0 zone resets 00:10:21.440 slat (usec): min=3, max=145660, avg=60.35, stdev=983.58 00:10:21.440 clat (usec): min=4, max=145832, avg=293.01, stdev=1898.16 00:10:21.440 lat (usec): min=23, max=145851, avg=353.37, stdev=2141.34 00:10:21.440 clat percentiles (usec): 00:10:21.440 | 50.000th=[ 116], 99.000th=[ 5997], 99.900th=[ 27919], 00:10:21.440 | 99.990th=[ 65274], 99.999th=[101188] 00:10:21.440 bw ( KiB/s): min=436150, max=947695, per=98.68%, avg=674449.47, stdev=9038.50, samples=304 00:10:21.440 iops : min=109037, max=236921, avg=168608.89, stdev=2259.62, samples=304 00:10:21.440 lat (usec) : 10=0.01%, 20=0.65%, 50=19.60%, 100=37.36%, 250=37.06% 00:10:21.440 lat (usec) : 500=2.00%, 750=1.96%, 1000=0.21% 00:10:21.440 lat (msec) : 2=0.18%, 4=0.15%, 10=0.32%, 20=0.38%, 50=0.09% 00:10:21.440 lat (msec) : 100=0.01%, 250=0.01% 00:10:21.440 cpu : usr=52.73%, sys=1.01%, ctx=19696, majf=0, minf=118210 00:10:21.440 IO depths : 1=12.4%, 2=24.7%, 4=50.2%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.440 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.440 issued rwts: total=1068963,1707329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.440 latency : target=0, window=0, percentile=100.00%, depth=8 00:10:21.440 00:10:21.440 Run status group 0 (all jobs): 00:10:21.440 READ: bw=417MiB/s (437MB/s), 417MiB/s-417MiB/s (437MB/s-437MB/s), io=4176MiB (4378MB), run=10011-10011msec 00:10:21.440 WRITE: bw=667MiB/s (700MB/s), 667MiB/s-667MiB/s (700MB/s-700MB/s), io=6669MiB (6993MB), run=9992-9992msec 00:10:23.974 ----------------------------------------------------- 00:10:23.974 Suppressions used: 00:10:23.974 count bytes template 00:10:23.974 16 140 /usr/src/fio/parse.c 00:10:23.974 12133 1164768 /usr/src/fio/iolog.c 00:10:23.974 2 596 libcrypto.so 00:10:23.974 ----------------------------------------------------- 00:10:23.974 00:10:23.974 ************************************ 00:10:23.974 END TEST bdev_fio_rw_verify 00:10:23.974 
************************************ 00:10:23.974 00:10:23.974 real 0m14.825s 00:10:23.974 user 1m35.892s 00:10:23.974 sys 0m2.205s 00:10:23.974 23:25:46 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.974 23:25:46 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:10:23.974 23:25:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:10:23.975 23:25:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "f442ba9f-ccc2-4d92-8fa0-ac184693b38b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f442ba9f-ccc2-4d92-8fa0-ac184693b38b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d2fb27c9-99bf-5d6c-9960-94035c73ed9c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d2fb27c9-99bf-5d6c-9960-94035c73ed9c",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "1cb5255c-b42b-5bef-a564-33828aece1a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1cb5255c-b42b-5bef-a564-33828aece1a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "a12d902d-84e0-5865-9b16-3b269e19343d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a12d902d-84e0-5865-9b16-3b269e19343d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "6f6bc59b-8c96-54ba-90fb-c9e26db5d6e5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6f6bc59b-8c96-54ba-90fb-c9e26db5d6e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "8f6f331e-9e3c-5729-9e83-986d2bf9dcec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8f6f331e-9e3c-5729-9e83-986d2bf9dcec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' 
"offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "5f656d60-afd8-5fc4-8f8c-353caaeb5b2d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5f656d60-afd8-5fc4-8f8c-353caaeb5b2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "8c779e5d-f475-5f72-a3bc-abc50a3f4996"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8c779e5d-f475-5f72-a3bc-abc50a3f4996",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "43a3a3e6-0868-56c7-83e0-e582e4ed2f66"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "43a3a3e6-0868-56c7-83e0-e582e4ed2f66",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "7ce5efe0-ff79-5a39-bf01-53ee4d40a7e7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7ce5efe0-ff79-5a39-bf01-53ee4d40a7e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "4443b631-c2a0-5f5d-83c3-3735a82f6cd0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4443b631-c2a0-5f5d-83c3-3735a82f6cd0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' 
' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "f2ef3380-d530-596c-ad06-96e22edc04da"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f2ef3380-d530-596c-ad06-96e22edc04da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5f27ad5a-8e1c-4a19-995f-56503e585de6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5f27ad5a-8e1c-4a19-995f-56503e585de6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5f27ad5a-8e1c-4a19-995f-56503e585de6",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c212db74-15ac-45a5-bbde-6cc22ee5aaa3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f0ca39b8-4630-4c2d-9bc8-11992242e63f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "dca1f9e6-ca25-4d9e-98de-30a53b88c0be"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dca1f9e6-ca25-4d9e-98de-30a53b88c0be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": 
"system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "dca1f9e6-ca25-4d9e-98de-30a53b88c0be",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7bf915e1-5c5e-491e-9394-6b365d084a35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a1304eaa-01e2-4a97-8aa6-f5da0f7f6f91",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "52fb92bc-78fd-48e9-b0e7-caeb077c8ebf"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "52fb92bc-78fd-48e9-b0e7-caeb077c8ebf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "52fb92bc-78fd-48e9-b0e7-caeb077c8ebf",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "d4682cb6-c0af-4980-8b8a-19fc36e7caec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "986b1f38-fd6c-4dc6-b215-f9adabb003d7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "9f49edbd-9403-4e1b-8035-04a27cfa29f7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "9f49edbd-9403-4e1b-8035-04a27cfa29f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:10:23.975 23:25:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:10:23.975 Malloc1p0 00:10:23.975 Malloc1p1 00:10:23.975 Malloc2p0 00:10:23.975 Malloc2p1 00:10:23.975 Malloc2p2 
00:10:23.975 Malloc2p3 00:10:23.975 Malloc2p4 00:10:23.975 Malloc2p5 00:10:23.975 Malloc2p6 00:10:23.975 Malloc2p7 00:10:23.975 TestPT 00:10:23.975 raid0 00:10:23.975 concat0 ]] 00:10:23.975 23:25:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:10:23.976 23:25:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "f442ba9f-ccc2-4d92-8fa0-ac184693b38b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f442ba9f-ccc2-4d92-8fa0-ac184693b38b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d2fb27c9-99bf-5d6c-9960-94035c73ed9c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d2fb27c9-99bf-5d6c-9960-94035c73ed9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "1cb5255c-b42b-5bef-a564-33828aece1a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1cb5255c-b42b-5bef-a564-33828aece1a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "a12d902d-84e0-5865-9b16-3b269e19343d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a12d902d-84e0-5865-9b16-3b269e19343d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' 
'{' ' "name": "Malloc2p1",' ' "aliases": [' ' "6f6bc59b-8c96-54ba-90fb-c9e26db5d6e5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6f6bc59b-8c96-54ba-90fb-c9e26db5d6e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "8f6f331e-9e3c-5729-9e83-986d2bf9dcec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8f6f331e-9e3c-5729-9e83-986d2bf9dcec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "5f656d60-afd8-5fc4-8f8c-353caaeb5b2d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5f656d60-afd8-5fc4-8f8c-353caaeb5b2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "8c779e5d-f475-5f72-a3bc-abc50a3f4996"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8c779e5d-f475-5f72-a3bc-abc50a3f4996",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "43a3a3e6-0868-56c7-83e0-e582e4ed2f66"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "43a3a3e6-0868-56c7-83e0-e582e4ed2f66",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": 
true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "7ce5efe0-ff79-5a39-bf01-53ee4d40a7e7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7ce5efe0-ff79-5a39-bf01-53ee4d40a7e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "4443b631-c2a0-5f5d-83c3-3735a82f6cd0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4443b631-c2a0-5f5d-83c3-3735a82f6cd0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "f2ef3380-d530-596c-ad06-96e22edc04da"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f2ef3380-d530-596c-ad06-96e22edc04da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5f27ad5a-8e1c-4a19-995f-56503e585de6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5f27ad5a-8e1c-4a19-995f-56503e585de6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' 
},' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5f27ad5a-8e1c-4a19-995f-56503e585de6",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c212db74-15ac-45a5-bbde-6cc22ee5aaa3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "f0ca39b8-4630-4c2d-9bc8-11992242e63f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "dca1f9e6-ca25-4d9e-98de-30a53b88c0be"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dca1f9e6-ca25-4d9e-98de-30a53b88c0be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "dca1f9e6-ca25-4d9e-98de-30a53b88c0be",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7bf915e1-5c5e-491e-9394-6b365d084a35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a1304eaa-01e2-4a97-8aa6-f5da0f7f6f91",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "52fb92bc-78fd-48e9-b0e7-caeb077c8ebf"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "52fb92bc-78fd-48e9-b0e7-caeb077c8ebf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "52fb92bc-78fd-48e9-b0e7-caeb077c8ebf",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' 
"num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "d4682cb6-c0af-4980-8b8a-19fc36e7caec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "986b1f38-fd6c-4dc6-b215-f9adabb003d7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "9f49edbd-9403-4e1b-8035-04a27cfa29f7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "9f49edbd-9403-4e1b-8035-04a27cfa29f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 
00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:10:23.976 23:25:47 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:10:23.977 ************************************ 00:10:23.977 START TEST bdev_fio_trim 00:10:23.977 ************************************ 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # sanitizers=(libasan libclang_rt.asan) 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # shift 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libasan 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib=/lib64/libasan.so.6 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n /lib64/libasan.so.6 ]] 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # break 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:10:23.977 23:25:47 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:10:24.235 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, 
iodepth=8 00:10:24.235 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:10:24.235 fio-3.35 00:10:24.235 Starting 14 threads 00:10:36.434 00:10:36.434 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=51240: Tue May 14 23:25:59 2024 00:10:36.434 write: IOPS=341k, BW=1334MiB/s (1398MB/s)(13.0GiB/10007msec); 0 zone resets 00:10:36.434 slat (nsec): min=877, max=35027k, avg=12791.34, stdev=211849.00 00:10:36.434 clat (usec): min=7, max=44113, avg=120.68, stdev=730.27 00:10:36.434 lat (usec): min=9, max=44123, avg=133.47, stdev=760.01 00:10:36.434 clat percentiles (usec): 00:10:36.434 | 50.000th=[ 69], 99.000th=[ 734], 99.900th=[13304], 99.990th=[22152], 00:10:36.434 | 99.999th=[32113] 00:10:36.434 bw ( MiB/s): min= 872, max= 1989, per=98.99%, avg=1320.09, stdev=26.64, samples=266 00:10:36.434 iops : min=223354, max=509276, avg=337939.89, stdev=6818.74, samples=266 00:10:36.434 trim: IOPS=341k, BW=1334MiB/s (1398MB/s)(13.0GiB/10007msec); 0 zone resets 00:10:36.434 slat (nsec): min=1606, max=44017k, avg=9655.99, stdev=191826.72 00:10:36.434 clat (nsec): min=1650, max=44124k, avg=102482.37, stdev=600483.43 00:10:36.434 lat (usec): min=4, max=44130, avg=112.14, stdev=630.33 00:10:36.434 clat percentiles (usec): 00:10:36.434 | 50.000th=[ 77], 99.000th=[ 125], 99.900th=[13042], 99.990th=[21103], 00:10:36.434 | 99.999th=[27132] 00:10:36.434 bw ( MiB/s): min= 872, max= 1989, per=98.99%, avg=1320.10, stdev=26.64, samples=266 00:10:36.434 iops : min=223354, max=509288, avg=337940.84, stdev=6818.62, samples=266 00:10:36.434 lat (usec) : 2=0.01%, 4=0.01%, 10=0.28%, 20=0.58%, 50=17.63% 00:10:36.434 lat (usec) : 100=66.23%, 250=13.55%, 500=0.63%, 750=0.57%, 1000=0.26% 00:10:36.434 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.21%, 50=0.01% 00:10:36.434 cpu : usr=72.45%, sys=0.00%, ctx=5823, majf=0, minf=851 00:10:36.434 IO depths : 1=12.3%, 2=24.6%, 4=50.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.434 complete : 0=0.0%, 
4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.434 issued rwts: total=0,3416379,3416388,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.434 latency : target=0, window=0, percentile=100.00%, depth=8 00:10:36.434 00:10:36.434 Run status group 0 (all jobs): 00:10:36.434 WRITE: bw=1334MiB/s (1398MB/s), 1334MiB/s-1334MiB/s (1398MB/s-1398MB/s), io=13.0GiB (14.0GB), run=10007-10007msec 00:10:36.434 TRIM: bw=1334MiB/s (1398MB/s), 1334MiB/s-1334MiB/s (1398MB/s-1398MB/s), io=13.0GiB (14.0GB), run=10007-10007msec 00:10:38.964 ----------------------------------------------------- 00:10:38.964 Suppressions used: 00:10:38.964 count bytes template 00:10:38.964 14 129 /usr/src/fio/parse.c 00:10:38.964 2 596 libcrypto.so 00:10:38.964 ----------------------------------------------------- 00:10:38.964 00:10:38.964 ************************************ 00:10:38.964 END TEST bdev_fio_trim 00:10:38.964 ************************************ 00:10:38.964 00:10:38.964 real 0m14.856s 00:10:38.964 user 1m50.259s 00:10:38.964 sys 0m0.470s 00:10:38.964 23:26:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:38.964 23:26:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:10:38.964 23:26:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:10:38.964 23:26:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:10:38.964 23:26:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:10:38.964 /home/vagrant/spdk_repo/spdk 00:10:38.964 23:26:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:10:38.964 00:10:38.964 real 0m30.056s 00:10:38.964 user 3m26.313s 00:10:38.964 sys 0m2.791s 00:10:38.964 23:26:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:38.964 ************************************ 00:10:38.964 END TEST bdev_fio 00:10:38.964 ************************************ 00:10:38.964 23:26:01 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:10:38.964 23:26:01 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:38.964 23:26:01 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:38.964 23:26:01 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:10:38.964 23:26:01 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:38.964 23:26:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:38.964 ************************************ 00:10:38.964 START TEST bdev_verify 00:10:38.964 ************************************ 00:10:38.964 23:26:01 blockdev_general.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:38.964 [2024-05-14 23:26:02.124409] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:10:38.964 [2024-05-14 23:26:02.124669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51430 ] 00:10:39.221 [2024-05-14 23:26:02.291687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:39.479 [2024-05-14 23:26:02.556866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.479 [2024-05-14 23:26:02.556871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.737 [2024-05-14 23:26:03.013195] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:39.737 [2024-05-14 23:26:03.013325] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:39.737 [2024-05-14 23:26:03.021096] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:39.737 [2024-05-14 23:26:03.021158] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:39.996 [2024-05-14 23:26:03.029119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:39.996 [2024-05-14 23:26:03.029198] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:39.996 [2024-05-14 23:26:03.029266] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:39.996 [2024-05-14 23:26:03.209352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:39.996 [2024-05-14 23:26:03.209460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.996 [2024-05-14 23:26:03.209517] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002be80 00:10:39.996 [2024-05-14 23:26:03.209541] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.996 [2024-05-14 23:26:03.211443] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.996 [2024-05-14 23:26:03.211489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:40.562 Running I/O for 5 seconds... 
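For reference, the verify pass traced above can be rerun by hand against the same JSON bdev config; this is only a sketch built from the flags visible in the trace, and it assumes the same workspace layout as this run:

  cd /home/vagrant/spdk_repo/spdk
  # Same parameters as the traced run: queue depth 128, 4 KiB IOs, verify workload, 5 s, core mask 0x3 (two cores).
  # TestPT needs no extra flags here; per the vbdev_passthru notices above it is the passthru vbdev built on Malloc3 and comes from bdev.json.
  ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3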
00:10:45.839 00:10:45.839 Latency(us) 00:10:45.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.839 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x0 length 0x1000 00:10:45.839 Malloc0 : 5.09 2405.17 9.40 0.00 0.00 53153.42 363.05 110577.11 00:10:45.839 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x1000 length 0x1000 00:10:45.839 Malloc0 : 5.10 2113.54 8.26 0.00 0.00 60341.87 48.41 164912.41 00:10:45.839 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x0 length 0x800 00:10:45.839 Malloc1p0 : 5.12 1275.92 4.98 0.00 0.00 100066.20 1288.38 108193.98 00:10:45.839 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x800 length 0x800 00:10:45.839 Malloc1p0 : 5.05 1291.89 5.05 0.00 0.00 98833.82 889.95 86269.21 00:10:45.839 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x0 length 0x800 00:10:45.839 Malloc1p1 : 5.12 1275.66 4.98 0.00 0.00 99966.85 1206.46 108193.98 00:10:45.839 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x800 length 0x800 00:10:45.839 Malloc1p1 : 5.05 1291.67 5.05 0.00 0.00 98750.49 942.08 85792.58 00:10:45.839 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x0 length 0x200 00:10:45.839 Malloc2p0 : 5.12 1275.40 4.98 0.00 0.00 99860.19 1340.51 107240.73 00:10:45.839 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x200 length 0x200 00:10:45.839 Malloc2p0 : 5.05 1291.49 5.04 0.00 0.00 98664.18 975.59 84839.33 00:10:45.839 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x0 length 0x200 00:10:45.839 Malloc2p1 : 5.12 1275.17 4.98 0.00 0.00 99735.01 1683.08 105810.85 00:10:45.839 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x200 length 0x200 00:10:45.839 Malloc2p1 : 5.06 1291.28 5.04 0.00 0.00 98571.59 1131.99 83886.08 00:10:45.839 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x0 length 0x200 00:10:45.839 Malloc2p2 : 5.12 1274.91 4.98 0.00 0.00 99610.46 1362.85 104380.97 00:10:45.839 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x200 length 0x200 00:10:45.839 Malloc2p2 : 5.06 1291.09 5.04 0.00 0.00 98477.73 1459.67 82932.83 00:10:45.839 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x0 length 0x200 00:10:45.839 Malloc2p3 : 5.12 1274.67 4.98 0.00 0.00 99502.29 1228.80 103904.35 00:10:45.839 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x200 length 0x200 00:10:45.839 Malloc2p3 : 5.12 1300.85 5.08 0.00 0.00 97622.95 1094.75 81979.58 00:10:45.839 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x0 length 0x200 00:10:45.839 Malloc2p4 : 5.12 1274.41 4.98 0.00 0.00 99392.00 1243.69 102951.10 
00:10:45.839 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x200 length 0x200 00:10:45.839 Malloc2p4 : 5.12 1300.61 5.08 0.00 0.00 97544.77 901.12 81979.58 00:10:45.839 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x0 length 0x200 00:10:45.839 Malloc2p5 : 5.12 1274.17 4.98 0.00 0.00 99278.26 1489.45 101997.85 00:10:45.839 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.839 Verification LBA range: start 0x200 length 0x200 00:10:45.840 Malloc2p5 : 5.12 1300.36 5.08 0.00 0.00 97465.75 1005.38 81979.58 00:10:45.840 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x0 length 0x200 00:10:45.840 Malloc2p6 : 5.12 1273.97 4.98 0.00 0.00 99153.56 1072.41 101044.60 00:10:45.840 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x200 length 0x200 00:10:45.840 Malloc2p6 : 5.12 1300.09 5.08 0.00 0.00 97379.06 916.01 81979.58 00:10:45.840 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x0 length 0x200 00:10:45.840 Malloc2p7 : 5.12 1273.80 4.98 0.00 0.00 99044.73 1705.43 97708.22 00:10:45.840 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x200 length 0x200 00:10:45.840 Malloc2p7 : 5.12 1299.84 5.08 0.00 0.00 97309.32 1251.14 81502.95 00:10:45.840 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x0 length 0x1000 00:10:45.840 TestPT : 5.13 1273.64 4.98 0.00 0.00 98905.42 1437.32 94371.84 00:10:45.840 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x1000 length 0x1000 00:10:45.840 TestPT : 5.12 1279.85 5.00 0.00 0.00 98534.85 5093.93 81026.33 00:10:45.840 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x0 length 0x2000 00:10:45.840 raid0 : 5.13 1273.45 4.97 0.00 0.00 98782.65 1228.80 91512.09 00:10:45.840 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x2000 length 0x2000 00:10:45.840 raid0 : 5.12 1299.53 5.08 0.00 0.00 97092.52 1556.48 75783.45 00:10:45.840 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x0 length 0x2000 00:10:45.840 concat0 : 5.13 1273.24 4.97 0.00 0.00 98679.69 1251.14 91035.46 00:10:45.840 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x2000 length 0x2000 00:10:45.840 concat0 : 5.12 1299.30 5.08 0.00 0.00 96979.75 1184.12 74830.20 00:10:45.840 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x0 length 0x1000 00:10:45.840 raid1 : 5.13 1273.04 4.97 0.00 0.00 98570.75 1496.90 88652.33 00:10:45.840 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x1000 length 0x1000 00:10:45.840 raid1 : 5.12 1299.07 5.07 0.00 0.00 96875.12 1280.93 76260.07 00:10:45.840 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x0 length 0x4e2 00:10:45.840 
AIO0 : 5.13 1272.60 4.97 0.00 0.00 98446.23 536.20 88652.33 00:10:45.840 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:45.840 Verification LBA range: start 0x4e2 length 0x4e2 00:10:45.840 AIO0 : 5.12 1278.90 5.00 0.00 0.00 98130.55 5123.72 76260.07 00:10:45.840 =================================================================================================================== 00:10:45.840 Total : 43048.56 168.16 0.00 0.00 94165.00 48.41 164912.41 00:10:47.776 00:10:47.776 real 0m9.046s 00:10:47.776 user 0m16.239s 00:10:47.776 sys 0m0.645s 00:10:47.776 23:26:11 blockdev_general.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:47.776 ************************************ 00:10:47.776 END TEST bdev_verify 00:10:47.776 ************************************ 00:10:47.776 23:26:11 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:48.034 23:26:11 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:48.034 23:26:11 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:10:48.034 23:26:11 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:48.034 23:26:11 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:48.034 ************************************ 00:10:48.034 START TEST bdev_verify_big_io 00:10:48.034 ************************************ 00:10:48.034 23:26:11 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:48.034 [2024-05-14 23:26:11.224604] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:10:48.034 [2024-05-14 23:26:11.224810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51564 ] 00:10:48.292 [2024-05-14 23:26:11.378861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.552 [2024-05-14 23:26:11.630164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.552 [2024-05-14 23:26:11.630172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.811 [2024-05-14 23:26:12.078663] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:48.811 [2024-05-14 23:26:12.078789] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:48.811 [2024-05-14 23:26:12.086638] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:48.811 [2024-05-14 23:26:12.086698] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:48.811 [2024-05-14 23:26:12.094665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:48.811 [2024-05-14 23:26:12.094729] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:48.811 [2024-05-14 23:26:12.094780] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:49.069 [2024-05-14 23:26:12.272377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:49.069 [2024-05-14 23:26:12.272486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.069 [2024-05-14 23:26:12.272549] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002be80 00:10:49.069 [2024-05-14 23:26:12.272574] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.069 [2024-05-14 23:26:12.274484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.069 [2024-05-14 23:26:12.274528] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:49.327 [2024-05-14 23:26:12.610620] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:10:49.327 [2024-05-14 23:26:12.614068] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.617855] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.621800] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.625051] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.628841] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.632395] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.636325] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.639705] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.643664] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.646963] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.650893] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.654900] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.658063] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.661945] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.665356] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:10:49.584 [2024-05-14 23:26:12.751369] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:10:49.584 [2024-05-14 23:26:12.758047] bdevperf.c:1831:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:10:49.584 Running I/O for 5 seconds... 00:10:56.154 00:10:56.154 Latency(us) 00:10:56.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.154 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x0 length 0x100 00:10:56.154 Malloc0 : 5.43 447.68 27.98 0.00 0.00 282767.31 348.16 991380.95 00:10:56.154 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x100 length 0x100 00:10:56.154 Malloc0 : 5.32 457.20 28.57 0.00 0.00 276894.85 355.61 1021884.97 00:10:56.154 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x0 length 0x80 00:10:56.154 Malloc1p0 : 5.56 187.05 11.69 0.00 0.00 658910.02 1802.24 1159153.11 00:10:56.154 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x80 length 0x80 00:10:56.154 Malloc1p0 : 5.63 117.33 7.33 0.00 0.00 1037428.03 1608.61 1654843.58 00:10:56.154 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x0 length 0x80 00:10:56.154 Malloc1p1 : 5.70 75.72 4.73 0.00 0.00 1591266.60 848.99 2348810.24 00:10:56.154 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x80 length 0x80 00:10:56.154 Malloc1p1 : 5.83 76.81 4.80 0.00 0.00 1547332.89 990.49 2303054.20 00:10:56.154 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x0 length 0x20 00:10:56.154 Malloc2p0 : 5.53 60.81 3.80 0.00 0.00 494787.70 547.37 838860.80 00:10:56.154 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x20 length 0x20 00:10:56.154 Malloc2p0 : 5.53 63.62 3.98 0.00 0.00 472298.62 517.59 838860.80 00:10:56.154 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x0 length 0x20 00:10:56.154 Malloc2p1 : 5.53 60.80 3.80 0.00 0.00 492980.48 398.43 827421.79 00:10:56.154 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x20 length 0x20 00:10:56.154 Malloc2p1 : 5.53 63.61 3.98 0.00 0.00 470168.25 569.72 823608.79 00:10:56.154 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x0 length 0x20 00:10:56.154 Malloc2p2 : 5.53 60.80 3.80 0.00 0.00 490986.44 389.12 815982.78 00:10:56.154 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x20 length 0x20 00:10:56.154 Malloc2p2 : 5.53 63.61 3.98 0.00 0.00 468085.84 536.20 812169.77 00:10:56.154 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 
32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x0 length 0x20 00:10:56.154 Malloc2p3 : 5.56 63.30 3.96 0.00 0.00 472448.02 439.39 800730.76 00:10:56.154 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x20 length 0x20 00:10:56.154 Malloc2p3 : 5.53 63.60 3.98 0.00 0.00 466074.19 618.12 796917.76 00:10:56.154 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x0 length 0x20 00:10:56.154 Malloc2p4 : 5.56 63.29 3.96 0.00 0.00 470502.99 525.03 789291.75 00:10:56.154 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x20 length 0x20 00:10:56.154 Malloc2p4 : 5.57 66.06 4.13 0.00 0.00 448761.94 603.23 781665.75 00:10:56.154 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x0 length 0x20 00:10:56.154 Malloc2p5 : 5.56 63.28 3.96 0.00 0.00 468457.21 517.59 774039.74 00:10:56.154 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x20 length 0x20 00:10:56.154 Malloc2p5 : 5.57 66.05 4.13 0.00 0.00 446790.53 506.41 770226.73 00:10:56.154 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x0 length 0x20 00:10:56.154 Malloc2p6 : 5.56 63.28 3.95 0.00 0.00 466467.72 387.26 762600.73 00:10:56.154 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x20 length 0x20 00:10:56.154 Malloc2p6 : 5.57 66.05 4.13 0.00 0.00 444655.43 517.59 754974.72 00:10:56.154 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x0 length 0x20 00:10:56.154 Malloc2p7 : 5.56 63.27 3.95 0.00 0.00 464543.17 402.15 751161.72 00:10:56.154 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:56.154 Verification LBA range: start 0x20 length 0x20 00:10:56.155 Malloc2p7 : 5.57 66.04 4.13 0.00 0.00 442675.05 498.97 739722.71 00:10:56.155 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:56.155 Verification LBA range: start 0x0 length 0x100 00:10:56.155 TestPT : 5.74 75.63 4.73 0.00 0.00 1519629.11 45279.42 1982761.89 00:10:56.155 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:56.155 Verification LBA range: start 0x100 length 0x100 00:10:56.155 TestPT : 5.85 76.64 4.79 0.00 0.00 1481497.13 35031.97 1937005.85 00:10:56.155 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:56.155 Verification LBA range: start 0x0 length 0x200 00:10:56.155 raid0 : 5.68 81.63 5.10 0.00 0.00 1394369.81 860.16 2104778.01 00:10:56.155 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:56.155 Verification LBA range: start 0x200 length 0x200 00:10:56.155 raid0 : 5.83 85.01 5.31 0.00 0.00 1322140.11 904.84 2028517.93 00:10:56.155 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:56.155 Verification LBA range: start 0x0 length 0x200 00:10:56.155 concat0 : 5.71 94.09 5.88 0.00 0.00 1198724.12 912.29 2028517.93 00:10:56.155 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:56.155 Verification LBA range: start 0x200 length 0x200 00:10:56.155 concat0 : 5.80 95.34 5.96 0.00 0.00 1174346.08 916.01 1967509.88 
00:10:56.155 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:56.155 Verification LBA range: start 0x0 length 0x100 00:10:56.155 raid1 : 5.77 99.83 6.24 0.00 0.00 1115070.84 1064.96 1944631.85 00:10:56.155 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:56.155 Verification LBA range: start 0x100 length 0x100 00:10:56.155 raid1 : 5.85 119.89 7.49 0.00 0.00 923217.25 1027.72 1883623.80 00:10:56.155 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:10:56.155 Verification LBA range: start 0x0 length 0x4e 00:10:56.155 AIO0 : 5.81 111.72 6.98 0.00 0.00 600721.38 1042.62 1159153.11 00:10:56.155 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:10:56.155 Verification LBA range: start 0x4e length 0x4e 00:10:56.155 AIO0 : 5.85 111.13 6.95 0.00 0.00 599406.46 459.87 1113397.06 00:10:56.155 =================================================================================================================== 00:10:56.155 Total : 3330.15 208.13 0.00 0.00 684829.67 348.16 2348810.24 00:10:58.057 00:10:58.057 real 0m9.962s 00:10:58.057 user 0m18.204s 00:10:58.057 sys 0m0.573s 00:10:58.057 23:26:21 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:58.057 ************************************ 00:10:58.057 END TEST bdev_verify_big_io 00:10:58.057 ************************************ 00:10:58.057 23:26:21 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:58.057 23:26:21 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:58.057 23:26:21 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:10:58.058 23:26:21 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:58.058 23:26:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:58.058 ************************************ 00:10:58.058 START TEST bdev_write_zeroes 00:10:58.058 ************************************ 00:10:58.058 23:26:21 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:58.058 [2024-05-14 23:26:21.230414] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:10:58.058 [2024-05-14 23:26:21.230614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51708 ] 00:10:58.316 [2024-05-14 23:26:21.380034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.316 [2024-05-14 23:26:21.594679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.883 [2024-05-14 23:26:22.023391] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:58.883 [2024-05-14 23:26:22.023511] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:58.883 [2024-05-14 23:26:22.031336] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:58.883 [2024-05-14 23:26:22.031388] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:58.883 [2024-05-14 23:26:22.039361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:58.883 [2024-05-14 23:26:22.039408] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:58.883 [2024-05-14 23:26:22.039450] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:59.141 [2024-05-14 23:26:22.212996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:59.141 [2024-05-14 23:26:22.213130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.141 [2024-05-14 23:26:22.213179] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002bb80 00:10:59.141 [2024-05-14 23:26:22.213226] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.141 [2024-05-14 23:26:22.214894] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.141 [2024-05-14 23:26:22.214954] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:59.398 Running I/O for 1 seconds... 
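Because every job below uses 4096-byte IOs, the MiB/s column can be cross-checked from IOPS alone; a quick shell sanity check, using the numbers from this run's Total row that follows:

  # 214279 IOPS x 4096 bytes per IO, expressed in MiB/s; matches the ~837 MiB/s reported in the Total row below.
  echo $((214279 * 4096 / 1048576))   # prints 837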
00:11:00.334 00:11:00.334 Latency(us) 00:11:00.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.334 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 Malloc0 : 1.01 13402.51 52.35 0.00 0.00 9545.94 310.92 18469.24 00:11:00.334 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 Malloc1p0 : 1.01 13398.34 52.34 0.00 0.00 9542.66 389.12 17992.61 00:11:00.334 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 Malloc1p1 : 1.01 13394.26 52.32 0.00 0.00 9534.73 413.32 17635.14 00:11:00.334 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 Malloc2p0 : 1.01 13390.36 52.31 0.00 0.00 9530.78 418.91 17158.52 00:11:00.334 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 Malloc2p1 : 1.01 13386.44 52.29 0.00 0.00 9525.34 390.98 16801.05 00:11:00.334 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 Malloc2p2 : 1.01 13382.53 52.28 0.00 0.00 9520.61 385.40 16443.58 00:11:00.334 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 Malloc2p3 : 1.02 13411.92 52.39 0.00 0.00 9489.87 415.19 15966.95 00:11:00.334 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 Malloc2p4 : 1.02 13407.93 52.37 0.00 0.00 9484.09 392.84 15609.48 00:11:00.334 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 Malloc2p5 : 1.02 13404.14 52.36 0.00 0.00 9478.42 441.25 15073.28 00:11:00.334 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 Malloc2p6 : 1.02 13400.24 52.34 0.00 0.00 9471.82 422.63 14656.23 00:11:00.334 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 Malloc2p7 : 1.02 13396.38 52.33 0.00 0.00 9465.46 426.36 14179.61 00:11:00.334 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 TestPT : 1.02 13392.49 52.31 0.00 0.00 9459.35 452.42 13762.56 00:11:00.334 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 raid0 : 1.02 13387.35 52.29 0.00 0.00 9451.23 733.56 12868.89 00:11:00.334 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 concat0 : 1.02 13382.25 52.27 0.00 0.00 9439.16 700.04 11975.21 00:11:00.334 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 raid1 : 1.02 13375.50 52.25 0.00 0.00 9427.67 1161.77 10783.65 00:11:00.334 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:00.334 AIO0 : 1.02 13366.53 52.21 0.00 0.00 9412.27 863.88 10783.65 00:11:00.334 =================================================================================================================== 00:11:00.334 Total : 214279.17 837.03 0.00 0.00 9486.05 310.92 18469.24 00:11:02.886 00:11:02.886 real 0m4.525s 00:11:02.886 user 0m3.855s 00:11:02.886 sys 0m0.455s 00:11:02.886 ************************************ 00:11:02.886 END TEST bdev_write_zeroes 00:11:02.886 ************************************ 00:11:02.886 23:26:25 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:02.886 23:26:25 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:02.886 23:26:25 blockdev_general -- 
bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:02.886 23:26:25 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:11:02.886 23:26:25 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:02.886 23:26:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:02.886 ************************************ 00:11:02.886 START TEST bdev_json_nonenclosed 00:11:02.886 ************************************ 00:11:02.886 23:26:25 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:02.886 [2024-05-14 23:26:25.807100] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:11:02.887 [2024-05-14 23:26:25.807355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51791 ] 00:11:02.887 [2024-05-14 23:26:25.964733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.887 [2024-05-14 23:26:26.169481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.887 [2024-05-14 23:26:26.169634] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:02.887 [2024-05-14 23:26:26.169672] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:02.887 [2024-05-14 23:26:26.169694] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:03.454 00:11:03.454 real 0m0.849s 00:11:03.454 user 0m0.534s 00:11:03.454 sys 0m0.117s 00:11:03.454 23:26:26 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:03.454 ************************************ 00:11:03.454 END TEST bdev_json_nonenclosed 00:11:03.454 ************************************ 00:11:03.454 23:26:26 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:03.454 23:26:26 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.454 23:26:26 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:11:03.454 23:26:26 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:03.454 23:26:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:03.454 ************************************ 00:11:03.454 START TEST bdev_json_nonarray 00:11:03.454 ************************************ 00:11:03.454 23:26:26 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.454 [2024-05-14 23:26:26.705707] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:11:03.454 [2024-05-14 23:26:26.705927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51820 ] 00:11:03.712 [2024-05-14 23:26:26.855190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.971 [2024-05-14 23:26:27.060948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.971 [2024-05-14 23:26:27.061129] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:11:03.971 [2024-05-14 23:26:27.061195] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:03.971 [2024-05-14 23:26:27.061220] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.230 00:11:04.230 real 0m0.849s 00:11:04.230 user 0m0.543s 00:11:04.230 sys 0m0.110s 00:11:04.230 23:26:27 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:04.230 23:26:27 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:04.230 ************************************ 00:11:04.230 END TEST bdev_json_nonarray 00:11:04.230 ************************************ 00:11:04.230 23:26:27 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:11:04.230 23:26:27 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:11:04.230 23:26:27 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:04.230 23:26:27 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.230 23:26:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:04.230 ************************************ 00:11:04.230 START TEST bdev_qos 00:11:04.230 ************************************ 00:11:04.230 23:26:27 blockdev_general.bdev_qos -- common/autotest_common.sh@1121 -- # qos_test_suite '' 00:11:04.230 23:26:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=51859 00:11:04.230 Process qos testing pid: 51859 00:11:04.230 23:26:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 51859' 00:11:04.230 23:26:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:11:04.230 23:26:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 51859 00:11:04.230 23:26:27 blockdev_general.bdev_qos -- common/autotest_common.sh@827 -- # '[' -z 51859 ']' 00:11:04.230 23:26:27 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.230 23:26:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:11:04.230 23:26:27 blockdev_general.bdev_qos -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:04.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.230 23:26:27 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
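Once bdevperf is listening on the socket above, the two test bdevs are created over RPC. Outside the test harness the same setup could be done with rpc.py; this sketch simply mirrors the rpc_cmd calls traced below (the socket is the default /var/tmp/spdk.sock mentioned above):

  # 128 MB malloc bdev with 512-byte blocks, plus a null bdev of the same geometry.
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b Malloc_0 128 512
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create Null_1 128 512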
00:11:04.230 23:26:27 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:04.230 23:26:27 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:04.489 [2024-05-14 23:26:27.602675] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:11:04.489 [2024-05-14 23:26:27.602852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51859 ] 00:11:04.489 [2024-05-14 23:26:27.758887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.056 [2024-05-14 23:26:28.054527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # return 0 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:05.343 Malloc_0 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_0 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:05.343 [ 00:11:05.343 { 00:11:05.343 "name": "Malloc_0", 00:11:05.343 "aliases": [ 00:11:05.343 "67ebafdd-5dba-4896-a39a-6968b2a10555" 00:11:05.343 ], 00:11:05.343 "product_name": "Malloc disk", 00:11:05.343 "block_size": 512, 00:11:05.343 "num_blocks": 262144, 00:11:05.343 "uuid": "67ebafdd-5dba-4896-a39a-6968b2a10555", 00:11:05.343 "assigned_rate_limits": { 00:11:05.343 "rw_ios_per_sec": 0, 00:11:05.343 "rw_mbytes_per_sec": 0, 00:11:05.343 "r_mbytes_per_sec": 0, 00:11:05.343 "w_mbytes_per_sec": 0 00:11:05.343 }, 00:11:05.343 "claimed": false, 00:11:05.343 "zoned": false, 00:11:05.343 "supported_io_types": { 00:11:05.343 "read": true, 00:11:05.343 "write": true, 00:11:05.343 "unmap": true, 00:11:05.343 "write_zeroes": true, 00:11:05.343 "flush": true, 
00:11:05.343 "reset": true, 00:11:05.343 "compare": false, 00:11:05.343 "compare_and_write": false, 00:11:05.343 "abort": true, 00:11:05.343 "nvme_admin": false, 00:11:05.343 "nvme_io": false 00:11:05.343 }, 00:11:05.343 "memory_domains": [ 00:11:05.343 { 00:11:05.343 "dma_device_id": "system", 00:11:05.343 "dma_device_type": 1 00:11:05.343 }, 00:11:05.343 { 00:11:05.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.343 "dma_device_type": 2 00:11:05.343 } 00:11:05.343 ], 00:11:05.343 "driver_specific": {} 00:11:05.343 } 00:11:05.343 ] 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:05.343 Null_1 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Null_1 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.343 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:05.343 [ 00:11:05.343 { 00:11:05.343 "name": "Null_1", 00:11:05.343 "aliases": [ 00:11:05.343 "0044e61f-8536-4448-b492-1fb54d6800f1" 00:11:05.343 ], 00:11:05.343 "product_name": "Null disk", 00:11:05.343 "block_size": 512, 00:11:05.343 "num_blocks": 262144, 00:11:05.343 "uuid": "0044e61f-8536-4448-b492-1fb54d6800f1", 00:11:05.343 "assigned_rate_limits": { 00:11:05.343 "rw_ios_per_sec": 0, 00:11:05.343 "rw_mbytes_per_sec": 0, 00:11:05.343 "r_mbytes_per_sec": 0, 00:11:05.343 "w_mbytes_per_sec": 0 00:11:05.343 }, 00:11:05.343 "claimed": false, 00:11:05.343 "zoned": false, 00:11:05.343 "supported_io_types": { 00:11:05.343 "read": true, 00:11:05.343 "write": true, 00:11:05.343 "unmap": false, 00:11:05.343 "write_zeroes": true, 00:11:05.343 "flush": false, 00:11:05.343 "reset": true, 00:11:05.343 "compare": false, 00:11:05.343 "compare_and_write": false, 00:11:05.343 "abort": true, 00:11:05.343 "nvme_admin": false, 00:11:05.343 "nvme_io": false 00:11:05.343 }, 00:11:05.343 "driver_specific": {} 00:11:05.616 } 00:11:05.616 ] 
00:11:05.616 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:11:05.616 23:26:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:11:05.616 Running I/O for 60 seconds... 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 161589.43 646357.72 0.00 0.00 653312.00 0.00 0.00 ' 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=161589.43 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 161589 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=161589 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=40000 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 40000 -gt 1000 ']' 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 40000 Malloc_0 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 40000 IOPS Malloc_0 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:10.883 23:26:33 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:10.883 ************************************ 00:11:10.883 START TEST bdev_qos_iops 00:11:10.883 ************************************ 00:11:10.883 23:26:33 
blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1121 -- # run_qos_test 40000 IOPS Malloc_0 00:11:10.883 23:26:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=40000 00:11:10.883 23:26:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:11:10.883 23:26:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:11:10.883 23:26:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:11:10.883 23:26:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:11:10.883 23:26:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:10.883 23:26:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:10.883 23:26:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:11:10.883 23:26:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:11:16.151 23:26:38 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 40041.41 160165.62 0.00 0.00 162080.00 0.00 0.00 ' 00:11:16.151 23:26:38 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:11:16.151 23:26:38 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:11:16.151 23:26:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=40041.41 00:11:16.151 23:26:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 40041 00:11:16.151 23:26:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=40041 00:11:16.151 23:26:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:11:16.151 23:26:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=36000 00:11:16.151 23:26:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=44000 00:11:16.151 23:26:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 40041 -lt 36000 ']' 00:11:16.151 23:26:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 40041 -gt 44000 ']' 00:11:16.151 00:11:16.151 real 0m5.191s 00:11:16.151 user 0m0.118s 00:11:16.151 sys 0m0.030s 00:11:16.151 23:26:39 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:16.151 23:26:39 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:11:16.151 ************************************ 00:11:16.151 END TEST bdev_qos_iops 00:11:16.151 ************************************ 00:11:16.151 23:26:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:11:16.151 23:26:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:11:16.151 23:26:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:11:16.151 23:26:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:16.151 23:26:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:16.151 23:26:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:11:16.151 23:26:39 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@378 -- # tail -1 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 46478.92 185915.68 0.00 0.00 188416.00 0.00 0.00 ' 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=188416.00 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 188416 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=188416 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=18 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 18 -lt 2 ']' 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 18 Null_1 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 18 BANDWIDTH Null_1 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:21.427 23:26:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:21.427 ************************************ 00:11:21.427 START TEST bdev_qos_bw 00:11:21.427 ************************************ 00:11:21.427 23:26:44 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1121 -- # run_qos_test 18 BANDWIDTH Null_1 00:11:21.427 23:26:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=18 00:11:21.427 23:26:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:11:21.427 23:26:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:11:21.427 23:26:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:11:21.427 23:26:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:11:21.427 23:26:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:21.427 23:26:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:21.427 23:26:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:11:21.427 23:26:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:11:26.712 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 4605.86 18423.46 0.00 0.00 18616.00 0.00 0.00 ' 00:11:26.712 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:11:26.712 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:26.712 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- 
bdev/blockdev.sh@382 -- # awk '{print $6}' 00:11:26.712 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=18616.00 00:11:26.712 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 18616 00:11:26.712 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=18616 00:11:26.712 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:26.712 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=18432 00:11:26.712 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=16588 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=20275 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 18616 -lt 16588 ']' 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 18616 -gt 20275 ']' 00:11:26.713 00:11:26.713 real 0m5.207s 00:11:26.713 user 0m0.124s 00:11:26.713 sys 0m0.031s 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:11:26.713 ************************************ 00:11:26.713 END TEST bdev_qos_bw 00:11:26.713 ************************************ 00:11:26.713 23:26:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:11:26.713 23:26:49 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.713 23:26:49 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:26.713 23:26:49 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.713 23:26:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:11:26.713 23:26:49 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:26.713 23:26:49 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:26.713 23:26:49 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:26.713 ************************************ 00:11:26.713 START TEST bdev_qos_ro_bw 00:11:26.713 ************************************ 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1121 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 
00:11:26.713 23:26:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 510.98 2043.91 0.00 0.00 2064.00 0.00 0.00 ' 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2064.00 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2064 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2064 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2064 -lt 1843 ']' 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2064 -gt 2252 ']' 00:11:31.977 00:11:31.977 real 0m5.161s 00:11:31.977 user 0m0.101s 00:11:31.977 sys 0m0.025s 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:31.977 ************************************ 00:11:31.977 END TEST bdev_qos_ro_bw 00:11:31.977 ************************************ 00:11:31.977 23:26:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:11:31.977 23:26:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:11:31.977 23:26:54 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.977 23:26:54 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:32.235 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.235 23:26:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:11:32.235 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.235 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:32.493 00:11:32.493 Latency(us) 00:11:32.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:32.493 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:11:32.493 Malloc_0 : 26.64 55096.10 215.22 0.00 0.00 4604.04 953.25 503316.48 00:11:32.493 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:11:32.493 Null_1 : 26.80 52054.85 203.34 0.00 0.00 4910.55 342.57 151566.89 00:11:32.493 =================================================================================================================== 00:11:32.493 Total : 107150.95 418.56 0.00 0.00 4753.40 342.57 503316.48 00:11:32.493 0 00:11:32.493 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
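[annotation, not part of the captured log] All three QoS checks above (IOPS, bandwidth, read-only bandwidth) accept a measured rate within roughly +/-10% of the configured cap, using integer arithmetic; the 36000/44000, 16588/20275 and 1843/2252 bounds printed in the trace follow directly from that, with the MB limits multiplied by 1024 before the comparison (18 -> 18432, 2 -> 2048). A minimal shell sketch of the same computation, offered as an illustration rather than the actual run_qos_test from bdev/blockdev.sh:

  # limit and measured rate in the unit the trace compares (IOPS, or the MB cap x 1024)
  qos_bounds_ok() {
    local limit=$1 measured=$2
    local lower=$(( limit * 9 / 10 ))    # 40000 -> 36000, 18432 -> 16588, 2048 -> 1843
    local upper=$(( limit * 11 / 10 ))   # 40000 -> 44000, 18432 -> 20275, 2048 -> 2252
    [ "$measured" -ge "$lower" ] && [ "$measured" -le "$upper" ]
  }
  qos_bounds_ok 2048 2064 && echo 'within bounds'   # reproduces the 2 MB read-only check above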
00:11:32.493 23:26:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 51859 00:11:32.493 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@946 -- # '[' -z 51859 ']' 00:11:32.493 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # kill -0 51859 00:11:32.493 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # uname 00:11:32.493 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:32.493 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 51859 00:11:32.493 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:32.493 killing process with pid 51859 00:11:32.493 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:32.493 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # echo 'killing process with pid 51859' 00:11:32.493 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@965 -- # kill 51859 00:11:32.493 23:26:55 blockdev_general.bdev_qos -- common/autotest_common.sh@970 -- # wait 51859 00:11:32.493 Received shutdown signal, test time was about 26.825237 seconds 00:11:32.493 00:11:32.493 Latency(us) 00:11:32.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:32.493 =================================================================================================================== 00:11:32.493 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:33.870 ************************************ 00:11:33.870 END TEST bdev_qos 00:11:33.870 ************************************ 00:11:33.870 23:26:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:11:33.870 00:11:33.870 real 0m29.363s 00:11:33.870 user 0m29.764s 00:11:33.870 sys 0m0.750s 00:11:33.870 23:26:56 blockdev_general.bdev_qos -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:33.870 23:26:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:11:33.870 23:26:56 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:11:33.870 23:26:56 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:33.870 23:26:56 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:33.870 23:26:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:33.870 ************************************ 00:11:33.870 START TEST bdev_qd_sampling 00:11:33.870 ************************************ 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1121 -- # qd_sampling_test_suite '' 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=52342 00:11:33.870 Process bdev QD sampling period testing pid: 52342 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 52342' 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 52342 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@827 -- # '[' -z 52342 ']' 00:11:33.870 23:26:56 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:33.870 23:26:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:11:33.870 [2024-05-14 23:26:57.016228] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:11:33.870 [2024-05-14 23:26:57.016511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52342 ] 00:11:34.129 [2024-05-14 23:26:57.182143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:34.388 [2024-05-14 23:26:57.425286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.388 [2024-05-14 23:26:57.425289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.647 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:34.647 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # return 0 00:11:34.647 23:26:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:11:34.647 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.647 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:34.906 Malloc_QD 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_QD 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local i 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 
00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:34.906 [ 00:11:34.906 { 00:11:34.906 "name": "Malloc_QD", 00:11:34.906 "aliases": [ 00:11:34.906 "464b27a9-95ea-479f-b9fd-754d67606066" 00:11:34.906 ], 00:11:34.906 "product_name": "Malloc disk", 00:11:34.906 "block_size": 512, 00:11:34.906 "num_blocks": 262144, 00:11:34.906 "uuid": "464b27a9-95ea-479f-b9fd-754d67606066", 00:11:34.906 "assigned_rate_limits": { 00:11:34.906 "rw_ios_per_sec": 0, 00:11:34.906 "rw_mbytes_per_sec": 0, 00:11:34.906 "r_mbytes_per_sec": 0, 00:11:34.906 "w_mbytes_per_sec": 0 00:11:34.906 }, 00:11:34.906 "claimed": false, 00:11:34.906 "zoned": false, 00:11:34.906 "supported_io_types": { 00:11:34.906 "read": true, 00:11:34.906 "write": true, 00:11:34.906 "unmap": true, 00:11:34.906 "write_zeroes": true, 00:11:34.906 "flush": true, 00:11:34.906 "reset": true, 00:11:34.906 "compare": false, 00:11:34.906 "compare_and_write": false, 00:11:34.906 "abort": true, 00:11:34.906 "nvme_admin": false, 00:11:34.906 "nvme_io": false 00:11:34.906 }, 00:11:34.906 "memory_domains": [ 00:11:34.906 { 00:11:34.906 "dma_device_id": "system", 00:11:34.906 "dma_device_type": 1 00:11:34.906 }, 00:11:34.906 { 00:11:34.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.906 "dma_device_type": 2 00:11:34.906 } 00:11:34.906 ], 00:11:34.906 "driver_specific": {} 00:11:34.906 } 00:11:34.906 ] 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # return 0 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:11:34.906 23:26:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:34.906 Running I/O for 5 seconds... 
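[annotation, not part of the captured log] The QD sampling check that follows is driven by two RPCs that appear verbatim in the trace, bdev_set_qd_sampling_period and bdev_get_iostat, with jq pulling queue_depth_polling_period back out of the iostat JSON. A standalone approximation, assuming the default RPC socket and with scripts/rpc.py standing in for the trace's rpc_cmd wrapper (the period value 10 is taken from the trace as-is, without restating its unit):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_set_qd_sampling_period Malloc_QD 10            # enable queue-depth polling, period value as traced
  $RPC bdev_get_iostat -b Malloc_QD \
      | jq -r '.bdevs[0].queue_depth_polling_period'       # expected to report 10, as in the iostat dump that follows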
00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.860 23:26:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:11:36.860 "tick_rate": 2200000000, 00:11:36.860 "ticks": 1421785125826, 00:11:36.860 "bdevs": [ 00:11:36.860 { 00:11:36.860 "name": "Malloc_QD", 00:11:36.860 "bytes_read": 1828753920, 00:11:36.860 "num_read_ops": 446467, 00:11:36.860 "bytes_written": 0, 00:11:36.860 "num_write_ops": 0, 00:11:36.860 "bytes_unmapped": 0, 00:11:36.860 "num_unmap_ops": 0, 00:11:36.860 "bytes_copied": 0, 00:11:36.860 "num_copy_ops": 0, 00:11:36.860 "read_latency_ticks": 2131955988480, 00:11:36.860 "max_read_latency_ticks": 9324003, 00:11:36.860 "min_read_latency_ticks": 260911, 00:11:36.860 "write_latency_ticks": 0, 00:11:36.860 "max_write_latency_ticks": 0, 00:11:36.860 "min_write_latency_ticks": 0, 00:11:36.860 "unmap_latency_ticks": 0, 00:11:36.860 "max_unmap_latency_ticks": 0, 00:11:36.861 "min_unmap_latency_ticks": 0, 00:11:36.861 "copy_latency_ticks": 0, 00:11:36.861 "max_copy_latency_ticks": 0, 00:11:36.861 "min_copy_latency_ticks": 0, 00:11:36.861 "io_error": {}, 00:11:36.861 "queue_depth_polling_period": 10, 00:11:36.861 "queue_depth": 512, 00:11:36.861 "io_time": 40, 00:11:36.861 "weighted_io_time": 20480 00:11:36.861 } 00:11:36.861 ] 00:11:36.861 }' 00:11:36.861 23:26:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:11:36.861 23:27:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:11:36.861 23:27:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:11:36.861 23:27:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:11:36.861 23:27:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:11:36.861 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.861 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:36.861 00:11:36.861 Latency(us) 00:11:36.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.861 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 
256, IO size: 4096) 00:11:36.861 Malloc_QD : 1.98 118579.88 463.20 0.00 0.00 2155.29 465.45 3410.85 00:11:36.861 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:11:36.861 Malloc_QD : 1.98 116881.93 456.57 0.00 0.00 2186.40 286.72 4259.84 00:11:36.861 =================================================================================================================== 00:11:36.861 Total : 235461.81 919.77 0.00 0.00 2170.73 286.72 4259.84 00:11:37.120 0 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 52342 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@946 -- # '[' -z 52342 ']' 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # kill -0 52342 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # uname 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 52342 00:11:37.120 killing process with pid 52342 00:11:37.120 Received shutdown signal, test time was about 2.115679 seconds 00:11:37.120 00:11:37.120 Latency(us) 00:11:37.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.120 =================================================================================================================== 00:11:37.120 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52342' 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@965 -- # kill 52342 00:11:37.120 23:27:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@970 -- # wait 52342 00:11:38.495 ************************************ 00:11:38.495 END TEST bdev_qd_sampling 00:11:38.495 ************************************ 00:11:38.495 23:27:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:11:38.495 00:11:38.495 real 0m4.823s 00:11:38.495 user 0m8.710s 00:11:38.495 sys 0m0.406s 00:11:38.495 23:27:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:38.495 23:27:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:11:38.495 23:27:01 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:11:38.495 23:27:01 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:38.495 23:27:01 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:38.495 23:27:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:38.495 ************************************ 00:11:38.495 START TEST bdev_error 00:11:38.495 ************************************ 00:11:38.495 23:27:01 blockdev_general.bdev_error -- common/autotest_common.sh@1121 -- # error_test_suite '' 00:11:38.495 23:27:01 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:11:38.495 23:27:01 
blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:11:38.495 23:27:01 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:11:38.495 23:27:01 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=52436 00:11:38.495 Process error testing pid: 52436 00:11:38.495 23:27:01 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 52436' 00:11:38.495 23:27:01 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 52436 00:11:38.495 23:27:01 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:11:38.495 23:27:01 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 52436 ']' 00:11:38.495 23:27:01 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.495 23:27:01 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:38.495 23:27:01 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.495 23:27:01 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:38.495 23:27:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:38.758 [2024-05-14 23:27:01.884099] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:11:38.758 [2024-05-14 23:27:01.884317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52436 ] 00:11:38.758 [2024-05-14 23:27:02.041989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.015 [2024-05-14 23:27:02.251999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:11:39.589 23:27:02 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:39.589 Dev_1 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.589 23:27:02 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:39.589 23:27:02 
blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:39.589 [ 00:11:39.589 { 00:11:39.589 "name": "Dev_1", 00:11:39.589 "aliases": [ 00:11:39.589 "02cebb28-7145-4a59-83a2-191d510b0b8a" 00:11:39.589 ], 00:11:39.589 "product_name": "Malloc disk", 00:11:39.589 "block_size": 512, 00:11:39.589 "num_blocks": 262144, 00:11:39.589 "uuid": "02cebb28-7145-4a59-83a2-191d510b0b8a", 00:11:39.589 "assigned_rate_limits": { 00:11:39.589 "rw_ios_per_sec": 0, 00:11:39.589 "rw_mbytes_per_sec": 0, 00:11:39.589 "r_mbytes_per_sec": 0, 00:11:39.589 "w_mbytes_per_sec": 0 00:11:39.589 }, 00:11:39.589 "claimed": false, 00:11:39.589 "zoned": false, 00:11:39.589 "supported_io_types": { 00:11:39.589 "read": true, 00:11:39.589 "write": true, 00:11:39.589 "unmap": true, 00:11:39.589 "write_zeroes": true, 00:11:39.589 "flush": true, 00:11:39.589 "reset": true, 00:11:39.589 "compare": false, 00:11:39.589 "compare_and_write": false, 00:11:39.589 "abort": true, 00:11:39.589 "nvme_admin": false, 00:11:39.589 "nvme_io": false 00:11:39.589 }, 00:11:39.589 "memory_domains": [ 00:11:39.589 { 00:11:39.589 "dma_device_id": "system", 00:11:39.589 "dma_device_type": 1 00:11:39.589 }, 00:11:39.589 { 00:11:39.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.589 "dma_device_type": 2 00:11:39.589 } 00:11:39.589 ], 00:11:39.589 "driver_specific": {} 00:11:39.589 } 00:11:39.589 ] 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:11:39.589 23:27:02 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:39.589 true 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.589 23:27:02 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.589 23:27:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:39.849 Dev_2 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.849 23:27:03 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:39.849 23:27:03 blockdev_general.bdev_error -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:39.849 [ 00:11:39.849 { 00:11:39.849 "name": "Dev_2", 00:11:39.849 "aliases": [ 00:11:39.849 "68e2fd8c-8818-453d-8b26-5c3e147e86b5" 00:11:39.849 ], 00:11:39.849 "product_name": "Malloc disk", 00:11:39.849 "block_size": 512, 00:11:39.849 "num_blocks": 262144, 00:11:39.849 "uuid": "68e2fd8c-8818-453d-8b26-5c3e147e86b5", 00:11:39.849 "assigned_rate_limits": { 00:11:39.849 "rw_ios_per_sec": 0, 00:11:39.849 "rw_mbytes_per_sec": 0, 00:11:39.849 "r_mbytes_per_sec": 0, 00:11:39.849 "w_mbytes_per_sec": 0 00:11:39.849 }, 00:11:39.849 "claimed": false, 00:11:39.849 "zoned": false, 00:11:39.849 "supported_io_types": { 00:11:39.849 "read": true, 00:11:39.849 "write": true, 00:11:39.849 "unmap": true, 00:11:39.849 "write_zeroes": true, 00:11:39.849 "flush": true, 00:11:39.849 "reset": true, 00:11:39.849 "compare": false, 00:11:39.849 "compare_and_write": false, 00:11:39.849 "abort": true, 00:11:39.849 "nvme_admin": false, 00:11:39.849 "nvme_io": false 00:11:39.849 }, 00:11:39.849 "memory_domains": [ 00:11:39.849 { 00:11:39.849 "dma_device_id": "system", 00:11:39.849 "dma_device_type": 1 00:11:39.849 }, 00:11:39.849 { 00:11:39.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.849 "dma_device_type": 2 00:11:39.849 } 00:11:39.849 ], 00:11:39.849 "driver_specific": {} 00:11:39.849 } 00:11:39.849 ] 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:11:39.849 23:27:03 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:39.849 23:27:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.849 23:27:03 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:11:39.849 23:27:03 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:11:39.849 Running I/O for 5 seconds... 00:11:40.781 Process is existed as continue on error is set. Pid: 52436 00:11:40.781 23:27:04 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 52436 00:11:40.781 23:27:04 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. 
Pid: 52436' 00:11:40.781 23:27:04 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:11:40.781 23:27:04 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.781 23:27:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:40.782 23:27:04 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.782 23:27:04 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:11:40.782 23:27:04 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.782 23:27:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:41.038 Timeout while waiting for response: 00:11:41.038 00:11:41.038 00:11:41.296 23:27:04 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.296 23:27:04 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:11:45.486 00:11:45.486 Latency(us) 00:11:45.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:45.486 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:45.486 EE_Dev_1 : 0.91 93834.89 366.54 5.51 0.00 169.22 69.35 551.10 00:11:45.486 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:45.486 Dev_2 : 5.00 196574.71 767.87 0.00 0.00 80.20 22.34 310759.80 00:11:45.486 =================================================================================================================== 00:11:45.486 Total : 290409.60 1134.41 5.51 0.00 87.30 22.34 310759.80 00:11:46.418 23:27:09 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 52436 00:11:46.418 23:27:09 blockdev_general.bdev_error -- common/autotest_common.sh@946 -- # '[' -z 52436 ']' 00:11:46.418 23:27:09 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # kill -0 52436 00:11:46.418 23:27:09 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # uname 00:11:46.418 23:27:09 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:46.418 23:27:09 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 52436 00:11:46.418 killing process with pid 52436 00:11:46.418 Received shutdown signal, test time was about 5.000000 seconds 00:11:46.418 00:11:46.418 Latency(us) 00:11:46.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.418 =================================================================================================================== 00:11:46.418 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:46.418 23:27:09 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:46.418 23:27:09 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:46.418 23:27:09 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52436' 00:11:46.418 23:27:09 blockdev_general.bdev_error -- common/autotest_common.sh@965 -- # kill 52436 00:11:46.418 23:27:09 blockdev_general.bdev_error -- common/autotest_common.sh@970 -- # wait 52436 00:11:47.792 Process error testing pid: 52560 00:11:47.792 23:27:10 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=52560 00:11:47.792 23:27:10 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 52560' 00:11:47.792 23:27:10 blockdev_general.bdev_error 
-- bdev/blockdev.sh@505 -- # waitforlisten 52560 00:11:47.792 23:27:10 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:11:47.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.792 23:27:10 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 52560 ']' 00:11:47.792 23:27:10 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.792 23:27:10 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:47.792 23:27:10 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.792 23:27:10 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:47.792 23:27:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:47.792 [2024-05-14 23:27:11.021519] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:11:47.792 [2024-05-14 23:27:11.021742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52560 ] 00:11:48.050 [2024-05-14 23:27:11.189862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.308 [2024-05-14 23:27:11.396774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.566 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:48.566 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:11:48.566 23:27:11 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:11:48.566 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.566 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:48.824 Dev_1 00:11:48.824 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.824 23:27:11 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:11:48.824 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:11:48.824 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:48.824 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:48.825 [ 00:11:48.825 { 00:11:48.825 "name": "Dev_1", 00:11:48.825 "aliases": [ 00:11:48.825 "32b8ae63-8a49-4e60-961d-6d3ee3a64ad2" 00:11:48.825 ], 00:11:48.825 "product_name": "Malloc disk", 00:11:48.825 "block_size": 512, 00:11:48.825 "num_blocks": 262144, 00:11:48.825 "uuid": "32b8ae63-8a49-4e60-961d-6d3ee3a64ad2", 00:11:48.825 "assigned_rate_limits": { 00:11:48.825 "rw_ios_per_sec": 0, 00:11:48.825 "rw_mbytes_per_sec": 0, 00:11:48.825 "r_mbytes_per_sec": 0, 00:11:48.825 "w_mbytes_per_sec": 0 00:11:48.825 }, 00:11:48.825 "claimed": false, 00:11:48.825 "zoned": false, 00:11:48.825 "supported_io_types": { 00:11:48.825 "read": true, 00:11:48.825 "write": true, 00:11:48.825 "unmap": true, 00:11:48.825 "write_zeroes": true, 00:11:48.825 "flush": true, 00:11:48.825 "reset": true, 00:11:48.825 "compare": false, 00:11:48.825 "compare_and_write": false, 00:11:48.825 "abort": true, 00:11:48.825 "nvme_admin": false, 00:11:48.825 "nvme_io": false 00:11:48.825 }, 00:11:48.825 "memory_domains": [ 00:11:48.825 { 00:11:48.825 "dma_device_id": "system", 00:11:48.825 "dma_device_type": 1 00:11:48.825 }, 00:11:48.825 { 00:11:48.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.825 "dma_device_type": 2 00:11:48.825 } 00:11:48.825 ], 00:11:48.825 "driver_specific": {} 00:11:48.825 } 00:11:48.825 ] 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:11:48.825 23:27:11 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:48.825 true 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.825 23:27:11 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.825 23:27:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:49.084 Dev_2 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.084 23:27:12 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.084 23:27:12 
blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:49.084 [ 00:11:49.084 { 00:11:49.084 "name": "Dev_2", 00:11:49.084 "aliases": [ 00:11:49.084 "696be228-6ca8-43cb-b4c3-d25d9c97bf51" 00:11:49.084 ], 00:11:49.084 "product_name": "Malloc disk", 00:11:49.084 "block_size": 512, 00:11:49.084 "num_blocks": 262144, 00:11:49.084 "uuid": "696be228-6ca8-43cb-b4c3-d25d9c97bf51", 00:11:49.084 "assigned_rate_limits": { 00:11:49.084 "rw_ios_per_sec": 0, 00:11:49.084 "rw_mbytes_per_sec": 0, 00:11:49.084 "r_mbytes_per_sec": 0, 00:11:49.084 "w_mbytes_per_sec": 0 00:11:49.084 }, 00:11:49.084 "claimed": false, 00:11:49.084 "zoned": false, 00:11:49.084 "supported_io_types": { 00:11:49.084 "read": true, 00:11:49.084 "write": true, 00:11:49.084 "unmap": true, 00:11:49.084 "write_zeroes": true, 00:11:49.084 "flush": true, 00:11:49.084 "reset": true, 00:11:49.084 "compare": false, 00:11:49.084 "compare_and_write": false, 00:11:49.084 "abort": true, 00:11:49.084 "nvme_admin": false, 00:11:49.084 "nvme_io": false 00:11:49.084 }, 00:11:49.084 "memory_domains": [ 00:11:49.084 { 00:11:49.084 "dma_device_id": "system", 00:11:49.084 "dma_device_type": 1 00:11:49.084 }, 00:11:49.084 { 00:11:49.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.084 "dma_device_type": 2 00:11:49.084 } 00:11:49.084 ], 00:11:49.084 "driver_specific": {} 00:11:49.084 } 00:11:49.084 ] 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:11:49.084 23:27:12 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.084 23:27:12 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 52560 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 52560 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:11:49.084 23:27:12 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.084 23:27:12 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 52560 00:11:49.084 Running I/O for 5 seconds... 
00:11:49.084 task offset: 208384 on job bdev=EE_Dev_1 fails 00:11:49.084 00:11:49.084 Latency(us) 00:11:49.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.084 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:49.084 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:11:49.084 EE_Dev_1 : 0.00 57441.25 224.38 13054.83 0.00 182.50 68.42 333.27 00:11:49.084 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:49.084 Dev_2 : 0.00 59479.55 232.34 0.00 0.00 143.24 66.09 233.66 00:11:49.084 =================================================================================================================== 00:11:49.084 Total : 116920.81 456.72 13054.83 0.00 161.21 66.09 333.27 00:11:49.084 [2024-05-14 23:27:12.276544] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:49.342 request: 00:11:49.342 { 00:11:49.342 "method": "perform_tests", 00:11:49.342 "req_id": 1 00:11:49.342 } 00:11:49.342 Got JSON-RPC error response 00:11:49.342 response: 00:11:49.342 { 00:11:49.342 "code": -32603, 00:11:49.342 "message": "bdevperf failed with error Operation not permitted" 00:11:49.342 } 00:11:51.240 23:27:14 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:11:51.240 23:27:14 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:51.240 23:27:14 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:11:51.240 23:27:14 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:11:51.240 23:27:14 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:11:51.240 ************************************ 00:11:51.240 END TEST bdev_error 00:11:51.240 ************************************ 00:11:51.240 23:27:14 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:51.240 00:11:51.240 real 0m12.329s 00:11:51.240 user 0m12.244s 00:11:51.240 sys 0m0.858s 00:11:51.240 23:27:14 blockdev_general.bdev_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:51.240 23:27:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:51.240 23:27:14 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:11:51.240 23:27:14 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:51.240 23:27:14 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:51.240 23:27:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:51.240 ************************************ 00:11:51.240 START TEST bdev_stat 00:11:51.240 ************************************ 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- common/autotest_common.sh@1121 -- # stat_test_suite '' 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:11:51.240 Process Bdev IO statistics testing pid: 52627 00:11:51.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
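The bdev_error run that just completed above is driven entirely over JSON-RPC against the bdevperf app listening on /var/tmp/spdk.sock. A minimal hand-run sketch of that sequence, assembled only from commands and names visible in this log (rpc.py is assumed to use its default socket, which matches the /var/tmp/spdk.sock announced above; the error bdev built over Dev_1 is the EE_Dev_1 named in the job output):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Two 128 MiB malloc bdevs with 512-byte blocks, as blockdev.sh creates above
  $RPC bdev_malloc_create -b Dev_1 128 512
  $RPC bdev_malloc_create -b Dev_2 128 512
  # Wrap Dev_1 in an error-injection bdev; the job output above names it EE_Dev_1
  $RPC bdev_error_create Dev_1
  # Fail the next 5 I/Os of any type submitted to EE_Dev_1
  $RPC bdev_error_inject_error EE_Dev_1 all failure -n 5
  # Kick off the I/O pass in the already-running bdevperf (-z) instance
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests

With the injection armed, the EE_Dev_1 job fails as shown in the latency table above and bdevperf returns the -32603 JSON-RPC error that the test then treats as the expected outcome.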
00:11:51.240 23:27:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=52627 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 52627' 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 52627 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- common/autotest_common.sh@827 -- # '[' -z 52627 ']' 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:51.240 23:27:14 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:51.240 [2024-05-14 23:27:14.267143] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:11:51.240 [2024-05-14 23:27:14.267384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52627 ] 00:11:51.240 [2024-05-14 23:27:14.420907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:51.498 [2024-05-14 23:27:14.628664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.498 [2024-05-14 23:27:14.628672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # return 0 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:52.090 Malloc_STAT 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_STAT 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local i 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:52.090 [ 00:11:52.090 { 00:11:52.090 "name": "Malloc_STAT", 00:11:52.090 "aliases": [ 00:11:52.090 "57299709-5bcd-4dc3-9848-a9092dbd3a92" 00:11:52.090 ], 00:11:52.090 "product_name": "Malloc disk", 00:11:52.090 "block_size": 512, 00:11:52.090 "num_blocks": 262144, 00:11:52.090 "uuid": "57299709-5bcd-4dc3-9848-a9092dbd3a92", 00:11:52.090 "assigned_rate_limits": { 00:11:52.090 "rw_ios_per_sec": 0, 00:11:52.090 "rw_mbytes_per_sec": 0, 00:11:52.090 "r_mbytes_per_sec": 0, 00:11:52.090 "w_mbytes_per_sec": 0 00:11:52.090 }, 00:11:52.090 "claimed": false, 00:11:52.090 "zoned": false, 00:11:52.090 "supported_io_types": { 00:11:52.090 "read": true, 00:11:52.090 "write": true, 00:11:52.090 "unmap": true, 00:11:52.090 "write_zeroes": true, 00:11:52.090 "flush": true, 00:11:52.090 "reset": true, 00:11:52.090 "compare": false, 00:11:52.090 "compare_and_write": false, 00:11:52.090 "abort": true, 00:11:52.090 "nvme_admin": false, 00:11:52.090 "nvme_io": false 00:11:52.090 }, 00:11:52.090 "memory_domains": [ 00:11:52.090 { 00:11:52.090 "dma_device_id": "system", 00:11:52.090 "dma_device_type": 1 00:11:52.090 }, 00:11:52.090 { 00:11:52.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.090 "dma_device_type": 2 00:11:52.090 } 00:11:52.090 ], 00:11:52.090 "driver_specific": {} 00:11:52.090 } 00:11:52.090 ] 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # return 0 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:11:52.090 23:27:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:52.090 Running I/O for 10 seconds... 
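The iostat JSON dumped next is what the stat test parses. A compact sketch of those queries, using the rpc.py path and the same jq filters that appear in this log (Malloc_STAT is the 128 MiB malloc bdev created just above):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Aggregate counters for the bdev under test
  $RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops'
  # Per-channel counters (-c); the test sums channel 0 and channel 1
  $RPC bdev_get_iostat -b Malloc_STAT -c | jq -r '.channels[0].num_read_ops'
  $RPC bdev_get_iostat -b Malloc_STAT -c | jq -r '.channels[1].num_read_ops'

The test then re-reads the aggregate count and checks that the per-channel sum lands between the first and second totals, which is the '[' 500224 -lt 480003 ']' / '[' 500224 -gt 534531 ']' comparison visible further down.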
00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:11:53.992 "tick_rate": 2200000000, 00:11:53.992 "ticks": 1459781843687, 00:11:53.992 "bdevs": [ 00:11:53.992 { 00:11:53.992 "name": "Malloc_STAT", 00:11:53.992 "bytes_read": 1966117376, 00:11:53.992 "num_read_ops": 480003, 00:11:53.992 "bytes_written": 0, 00:11:53.992 "num_write_ops": 0, 00:11:53.992 "bytes_unmapped": 0, 00:11:53.992 "num_unmap_ops": 0, 00:11:53.992 "bytes_copied": 0, 00:11:53.992 "num_copy_ops": 0, 00:11:53.992 "read_latency_ticks": 2153555914679, 00:11:53.992 "max_read_latency_ticks": 5593020, 00:11:53.992 "min_read_latency_ticks": 233774, 00:11:53.992 "write_latency_ticks": 0, 00:11:53.992 "max_write_latency_ticks": 0, 00:11:53.992 "min_write_latency_ticks": 0, 00:11:53.992 "unmap_latency_ticks": 0, 00:11:53.992 "max_unmap_latency_ticks": 0, 00:11:53.992 "min_unmap_latency_ticks": 0, 00:11:53.992 "copy_latency_ticks": 0, 00:11:53.992 "max_copy_latency_ticks": 0, 00:11:53.992 "min_copy_latency_ticks": 0, 00:11:53.992 "io_error": {} 00:11:53.992 } 00:11:53.992 ] 00:11:53.992 }' 00:11:53.992 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=480003 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:11:54.251 "tick_rate": 2200000000, 00:11:54.251 "ticks": 1459961603328, 00:11:54.251 "name": "Malloc_STAT", 00:11:54.251 "channels": [ 00:11:54.251 { 00:11:54.251 "thread_id": 2, 00:11:54.251 "bytes_read": 1029701632, 00:11:54.251 "num_read_ops": 251392, 00:11:54.251 "bytes_written": 0, 00:11:54.251 "num_write_ops": 0, 00:11:54.251 "bytes_unmapped": 0, 00:11:54.251 "num_unmap_ops": 0, 
00:11:54.251 "bytes_copied": 0, 00:11:54.251 "num_copy_ops": 0, 00:11:54.251 "read_latency_ticks": 1122480702376, 00:11:54.251 "max_read_latency_ticks": 7452801, 00:11:54.251 "min_read_latency_ticks": 3355091, 00:11:54.251 "write_latency_ticks": 0, 00:11:54.251 "max_write_latency_ticks": 0, 00:11:54.251 "min_write_latency_ticks": 0, 00:11:54.251 "unmap_latency_ticks": 0, 00:11:54.251 "max_unmap_latency_ticks": 0, 00:11:54.251 "min_unmap_latency_ticks": 0, 00:11:54.251 "copy_latency_ticks": 0, 00:11:54.251 "max_copy_latency_ticks": 0, 00:11:54.251 "min_copy_latency_ticks": 0 00:11:54.251 }, 00:11:54.251 { 00:11:54.251 "thread_id": 3, 00:11:54.251 "bytes_read": 1019215872, 00:11:54.251 "num_read_ops": 248832, 00:11:54.251 "bytes_written": 0, 00:11:54.251 "num_write_ops": 0, 00:11:54.251 "bytes_unmapped": 0, 00:11:54.251 "num_unmap_ops": 0, 00:11:54.251 "bytes_copied": 0, 00:11:54.251 "num_copy_ops": 0, 00:11:54.251 "read_latency_ticks": 1123434518567, 00:11:54.251 "max_read_latency_ticks": 5790554, 00:11:54.251 "min_read_latency_ticks": 3126682, 00:11:54.251 "write_latency_ticks": 0, 00:11:54.251 "max_write_latency_ticks": 0, 00:11:54.251 "min_write_latency_ticks": 0, 00:11:54.251 "unmap_latency_ticks": 0, 00:11:54.251 "max_unmap_latency_ticks": 0, 00:11:54.251 "min_unmap_latency_ticks": 0, 00:11:54.251 "copy_latency_ticks": 0, 00:11:54.251 "max_copy_latency_ticks": 0, 00:11:54.251 "min_copy_latency_ticks": 0 00:11:54.251 } 00:11:54.251 ] 00:11:54.251 }' 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=251392 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=251392 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=248832 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=500224 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:11:54.251 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.252 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:54.252 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.252 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:11:54.252 "tick_rate": 2200000000, 00:11:54.252 "ticks": 1460258638370, 00:11:54.252 "bdevs": [ 00:11:54.252 { 00:11:54.252 "name": "Malloc_STAT", 00:11:54.252 "bytes_read": 2189464064, 00:11:54.252 "num_read_ops": 534531, 00:11:54.252 "bytes_written": 0, 00:11:54.252 "num_write_ops": 0, 00:11:54.252 "bytes_unmapped": 0, 00:11:54.252 "num_unmap_ops": 0, 00:11:54.252 "bytes_copied": 0, 00:11:54.252 "num_copy_ops": 0, 00:11:54.252 "read_latency_ticks": 2397969922250, 00:11:54.252 "max_read_latency_ticks": 7452801, 00:11:54.252 "min_read_latency_ticks": 233774, 00:11:54.252 "write_latency_ticks": 0, 00:11:54.252 "max_write_latency_ticks": 0, 00:11:54.252 "min_write_latency_ticks": 0, 00:11:54.252 "unmap_latency_ticks": 0, 00:11:54.252 "max_unmap_latency_ticks": 0, 00:11:54.252 "min_unmap_latency_ticks": 0, 00:11:54.252 "copy_latency_ticks": 0, 00:11:54.252 "max_copy_latency_ticks": 0, 00:11:54.252 
"min_copy_latency_ticks": 0, 00:11:54.252 "io_error": {} 00:11:54.252 } 00:11:54.252 ] 00:11:54.252 }' 00:11:54.252 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=534531 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 500224 -lt 480003 ']' 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 500224 -gt 534531 ']' 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:54.510 00:11:54.510 Latency(us) 00:11:54.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.510 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:11:54.510 Malloc_STAT : 2.21 126094.80 492.56 0.00 0.00 2026.80 498.97 3395.96 00:11:54.510 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:11:54.510 Malloc_STAT : 2.21 124547.51 486.51 0.00 0.00 2051.90 372.36 2800.17 00:11:54.510 =================================================================================================================== 00:11:54.510 Total : 250642.31 979.07 0.00 0.00 2039.27 372.36 3395.96 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 52627 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@946 -- # '[' -z 52627 ']' 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # kill -0 52627 00:11:54.510 0 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # uname 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 52627 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:54.510 killing process with pid 52627 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52627' 00:11:54.510 Received shutdown signal, test time was about 2.346830 seconds 00:11:54.510 00:11:54.510 Latency(us) 00:11:54.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.510 =================================================================================================================== 00:11:54.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@965 -- # kill 52627 00:11:54.510 23:27:17 blockdev_general.bdev_stat -- common/autotest_common.sh@970 -- # wait 52627 00:11:55.883 23:27:19 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:11:55.883 00:11:55.883 real 0m4.922s 00:11:55.883 user 0m9.222s 00:11:55.883 sys 0m0.393s 00:11:55.883 23:27:19 blockdev_general.bdev_stat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:55.883 23:27:19 blockdev_general.bdev_stat 
-- common/autotest_common.sh@10 -- # set +x 00:11:55.883 ************************************ 00:11:55.883 END TEST bdev_stat 00:11:55.883 ************************************ 00:11:55.883 23:27:19 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:11:55.883 23:27:19 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:11:55.883 23:27:19 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:11:55.883 23:27:19 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:11:55.883 23:27:19 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:55.883 23:27:19 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:55.883 23:27:19 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:11:55.883 23:27:19 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:11:55.883 23:27:19 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:11:55.883 23:27:19 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:11:55.883 00:11:55.883 real 2m0.902s 00:11:55.883 user 5m25.521s 00:11:55.883 sys 0m8.997s 00:11:55.883 ************************************ 00:11:55.883 END TEST blockdev_general 00:11:55.883 ************************************ 00:11:55.883 23:27:19 blockdev_general -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:55.883 23:27:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:55.883 23:27:19 -- spdk/autotest.sh@186 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:55.883 23:27:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:55.883 23:27:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:55.883 23:27:19 -- common/autotest_common.sh@10 -- # set +x 00:11:55.883 ************************************ 00:11:55.883 START TEST bdev_raid 00:11:55.883 ************************************ 00:11:55.883 23:27:19 bdev_raid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:56.142 * Looking for test storage... 00:11:56.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:56.142 23:27:19 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:56.142 23:27:19 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:11:56.142 23:27:19 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:11:56.142 23:27:19 bdev_raid -- bdev/bdev_raid.sh@800 -- # trap 'on_error_exit;' ERR 00:11:56.142 23:27:19 bdev_raid -- bdev/bdev_raid.sh@802 -- # base_blocklen=512 00:11:56.142 23:27:19 bdev_raid -- bdev/bdev_raid.sh@804 -- # uname -s 00:11:56.142 23:27:19 bdev_raid -- bdev/bdev_raid.sh@804 -- # '[' Linux = Linux ']' 00:11:56.142 23:27:19 bdev_raid -- bdev/bdev_raid.sh@804 -- # modprobe -n nbd 00:11:56.142 modprobe: FATAL: Module nbd not found. 
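The raid0_resize_test that starts next talks to a bdev_svc app over /var/tmp/spdk-raid.sock. A condensed sketch of the sequence it runs, built only from RPC invocations that appear later in this log (null bdev sizes are in MiB, the -z strip size in KiB):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Two 32 MiB null base bdevs with 512-byte blocks
  $RPC bdev_null_create Base_1 32 512
  $RPC bdev_null_create Base_2 32 512
  # Assemble them into a raid0 bdev with a 64 KiB strip size
  $RPC bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
  # Grow each base from 32 MiB to 64 MiB, then confirm Raid doubled to 262144 blocks
  $RPC bdev_null_resize Base_1 64
  $RPC bdev_null_resize Base_2 64
  $RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'

As the log below records, the raid block count only changes from 131072 to 262144 once the second base bdev has been resized.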
00:11:56.142 23:27:19 bdev_raid -- bdev/bdev_raid.sh@811 -- # run_test raid0_resize_test raid0_resize_test 00:11:56.142 23:27:19 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:56.142 23:27:19 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:56.142 23:27:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.142 ************************************ 00:11:56.142 START TEST raid0_resize_test 00:11:56.142 ************************************ 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1121 -- # raid0_resize_test 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:11:56.142 Process raid pid: 52796 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # raid_pid=52796 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # echo 'Process raid pid: 52796' 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # waitforlisten 52796 /var/tmp/spdk-raid.sock 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@827 -- # '[' -z 52796 ']' 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:56.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:56.142 23:27:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.142 [2024-05-14 23:27:19.382322] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:11:56.142 [2024-05-14 23:27:19.382500] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.400 [2024-05-14 23:27:19.532015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.658 [2024-05-14 23:27:19.743217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.658 [2024-05-14 23:27:19.929455] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.223 23:27:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:57.223 23:27:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # return 0 00:11:57.223 23:27:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:11:57.223 Base_1 00:11:57.223 23:27:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:11:57.481 Base_2 00:11:57.481 23:27:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@363 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:11:57.739 [2024-05-14 23:27:20.789190] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:57.739 [2024-05-14 23:27:20.790740] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:57.739 [2024-05-14 23:27:20.790797] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:11:57.739 [2024-05-14 23:27:20.790810] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:57.740 [2024-05-14 23:27:20.790981] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:11:57.740 [2024-05-14 23:27:20.791229] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:11:57.740 [2024-05-14 23:27:20.791244] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000011500 00:11:57.740 [2024-05-14 23:27:20.791356] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.740 23:27:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:11:57.740 [2024-05-14 23:27:20.969234] bdev_raid.c:2216:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:57.740 [2024-05-14 23:27:20.969273] bdev_raid.c:2229:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:57.740 true 00:11:57.740 23:27:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # jq '.[].num_blocks' 00:11:57.740 23:27:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:57.998 [2024-05-14 23:27:21.193330] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.998 23:27:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # blkcnt=131072 00:11:57.998 23:27:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # raid_size_mb=64 00:11:57.998 23:27:21 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@371 -- # '[' 64 '!=' 64 ']' 00:11:57.998 23:27:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:11:58.257 [2024-05-14 23:27:21.425235] bdev_raid.c:2216:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:58.257 [2024-05-14 23:27:21.425272] bdev_raid.c:2229:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:58.257 [2024-05-14 23:27:21.425329] bdev_raid.c:2243:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:11:58.257 true 00:11:58.257 23:27:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:58.257 23:27:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # jq '.[].num_blocks' 00:11:58.515 [2024-05-14 23:27:21.661366] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # blkcnt=262144 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # raid_size_mb=128 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 52796 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@946 -- # '[' -z 52796 ']' 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # kill -0 52796 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # uname 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 52796 00:11:58.515 killing process with pid 52796 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52796' 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@965 -- # kill 52796 00:11:58.515 23:27:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # wait 52796 00:11:58.515 [2024-05-14 23:27:21.699123] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.515 [2024-05-14 23:27:21.699242] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.515 [2024-05-14 23:27:21.699298] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.515 [2024-05-14 23:27:21.699309] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Raid, state offline 00:11:58.515 [2024-05-14 23:27:21.699743] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.890 23:27:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:11:59.890 00:11:59.890 real 0m3.599s 00:11:59.890 user 0m4.985s 00:11:59.890 sys 0m0.434s 00:11:59.890 23:27:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:59.890 23:27:22 bdev_raid.raid0_resize_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:59.890 ************************************ 00:11:59.890 END TEST raid0_resize_test 00:11:59.890 ************************************ 00:11:59.890 23:27:22 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:11:59.890 23:27:22 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:11:59.890 23:27:22 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:11:59.890 23:27:22 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:59.890 23:27:22 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:59.890 23:27:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.890 ************************************ 00:11:59.890 START TEST raid_state_function_test 00:11:59.890 ************************************ 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 false 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # 
superblock_create_arg= 00:11:59.890 Process raid pid: 52890 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=52890 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 52890' 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 52890 /var/tmp/spdk-raid.sock 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 52890 ']' 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:59.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:59.890 23:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:59.891 23:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:59.891 23:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:59.891 23:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.891 [2024-05-14 23:27:23.038661] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:11:59.891 [2024-05-14 23:27:23.038861] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.149 [2024-05-14 23:27:23.206086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.410 [2024-05-14 23:27:23.441687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.410 [2024-05-14 23:27:23.638378] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.668 23:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:00.668 23:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:12:00.668 23:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:00.926 [2024-05-14 23:27:24.085936] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.926 [2024-05-14 23:27:24.086026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.926 [2024-05-14 23:27:24.086058] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:00.926 [2024-05-14 23:27:24.086076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:00.926 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:00.926 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:00.926 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:00.926 
23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:00.926 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:00.926 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:00.926 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:00.926 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:00.926 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:00.926 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:00.926 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.926 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.184 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:01.184 "name": "Existed_Raid", 00:12:01.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.184 "strip_size_kb": 64, 00:12:01.184 "state": "configuring", 00:12:01.184 "raid_level": "raid0", 00:12:01.184 "superblock": false, 00:12:01.184 "num_base_bdevs": 2, 00:12:01.184 "num_base_bdevs_discovered": 0, 00:12:01.184 "num_base_bdevs_operational": 2, 00:12:01.184 "base_bdevs_list": [ 00:12:01.184 { 00:12:01.184 "name": "BaseBdev1", 00:12:01.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.184 "is_configured": false, 00:12:01.184 "data_offset": 0, 00:12:01.184 "data_size": 0 00:12:01.184 }, 00:12:01.184 { 00:12:01.184 "name": "BaseBdev2", 00:12:01.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.184 "is_configured": false, 00:12:01.184 "data_offset": 0, 00:12:01.184 "data_size": 0 00:12:01.184 } 00:12:01.184 ] 00:12:01.184 }' 00:12:01.184 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:01.184 23:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.749 23:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:02.006 [2024-05-14 23:27:25.078025] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.006 [2024-05-14 23:27:25.078074] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:12:02.006 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:02.006 [2024-05-14 23:27:25.258034] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.006 [2024-05-14 23:27:25.258143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.006 [2024-05-14 23:27:25.258202] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.007 [2024-05-14 23:27:25.258230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.007 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.264 BaseBdev1 00:12:02.264 [2024-05-14 23:27:25.480448] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.264 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:12:02.264 23:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:02.264 23:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:02.264 23:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:02.264 23:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:02.264 23:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:02.264 23:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:02.522 23:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.780 [ 00:12:02.781 { 00:12:02.781 "name": "BaseBdev1", 00:12:02.781 "aliases": [ 00:12:02.781 "77c809c8-f005-47f9-b135-aa17a0feaf1a" 00:12:02.781 ], 00:12:02.781 "product_name": "Malloc disk", 00:12:02.781 "block_size": 512, 00:12:02.781 "num_blocks": 65536, 00:12:02.781 "uuid": "77c809c8-f005-47f9-b135-aa17a0feaf1a", 00:12:02.781 "assigned_rate_limits": { 00:12:02.781 "rw_ios_per_sec": 0, 00:12:02.781 "rw_mbytes_per_sec": 0, 00:12:02.781 "r_mbytes_per_sec": 0, 00:12:02.781 "w_mbytes_per_sec": 0 00:12:02.781 }, 00:12:02.781 "claimed": true, 00:12:02.781 "claim_type": "exclusive_write", 00:12:02.781 "zoned": false, 00:12:02.781 "supported_io_types": { 00:12:02.781 "read": true, 00:12:02.781 "write": true, 00:12:02.781 "unmap": true, 00:12:02.781 "write_zeroes": true, 00:12:02.781 "flush": true, 00:12:02.781 "reset": true, 00:12:02.781 "compare": false, 00:12:02.781 "compare_and_write": false, 00:12:02.781 "abort": true, 00:12:02.781 "nvme_admin": false, 00:12:02.781 "nvme_io": false 00:12:02.781 }, 00:12:02.781 "memory_domains": [ 00:12:02.781 { 00:12:02.781 "dma_device_id": "system", 00:12:02.781 "dma_device_type": 1 00:12:02.781 }, 00:12:02.781 { 00:12:02.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.781 "dma_device_type": 2 00:12:02.781 } 00:12:02.781 ], 00:12:02.781 "driver_specific": {} 00:12:02.781 } 00:12:02.781 ] 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:02.781 23:27:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.781 23:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.039 23:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:03.039 "name": "Existed_Raid", 00:12:03.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.039 "strip_size_kb": 64, 00:12:03.039 "state": "configuring", 00:12:03.039 "raid_level": "raid0", 00:12:03.039 "superblock": false, 00:12:03.039 "num_base_bdevs": 2, 00:12:03.039 "num_base_bdevs_discovered": 1, 00:12:03.039 "num_base_bdevs_operational": 2, 00:12:03.039 "base_bdevs_list": [ 00:12:03.039 { 00:12:03.039 "name": "BaseBdev1", 00:12:03.039 "uuid": "77c809c8-f005-47f9-b135-aa17a0feaf1a", 00:12:03.039 "is_configured": true, 00:12:03.039 "data_offset": 0, 00:12:03.039 "data_size": 65536 00:12:03.039 }, 00:12:03.039 { 00:12:03.039 "name": "BaseBdev2", 00:12:03.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.039 "is_configured": false, 00:12:03.039 "data_offset": 0, 00:12:03.039 "data_size": 0 00:12:03.039 } 00:12:03.039 ] 00:12:03.039 }' 00:12:03.039 23:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:03.039 23:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.604 23:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:03.862 [2024-05-14 23:27:26.928750] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.862 [2024-05-14 23:27:26.928810] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:12:03.862 23:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:03.862 [2024-05-14 23:27:27.140843] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.862 [2024-05-14 23:27:27.142643] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.862 [2024-05-14 23:27:27.142735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:04.120 "name": "Existed_Raid", 00:12:04.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.120 "strip_size_kb": 64, 00:12:04.120 "state": "configuring", 00:12:04.120 "raid_level": "raid0", 00:12:04.120 "superblock": false, 00:12:04.120 "num_base_bdevs": 2, 00:12:04.120 "num_base_bdevs_discovered": 1, 00:12:04.120 "num_base_bdevs_operational": 2, 00:12:04.120 "base_bdevs_list": [ 00:12:04.120 { 00:12:04.120 "name": "BaseBdev1", 00:12:04.120 "uuid": "77c809c8-f005-47f9-b135-aa17a0feaf1a", 00:12:04.120 "is_configured": true, 00:12:04.120 "data_offset": 0, 00:12:04.120 "data_size": 65536 00:12:04.120 }, 00:12:04.120 { 00:12:04.120 "name": "BaseBdev2", 00:12:04.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.120 "is_configured": false, 00:12:04.120 "data_offset": 0, 00:12:04.120 "data_size": 0 00:12:04.120 } 00:12:04.120 ] 00:12:04.120 }' 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:04.120 23:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.053 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.053 [2024-05-14 23:27:28.267620] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.053 [2024-05-14 23:27:28.267667] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:12:05.053 [2024-05-14 23:27:28.267677] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:05.053 [2024-05-14 23:27:28.267783] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:12:05.053 [2024-05-14 23:27:28.267988] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:12:05.053 [2024-05-14 23:27:28.268003] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:12:05.053 BaseBdev2 00:12:05.053 [2024-05-14 23:27:28.268544] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.053 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:12:05.053 
23:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:05.053 23:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:05.053 23:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:05.053 23:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:05.053 23:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:05.053 23:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:05.311 23:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.569 [ 00:12:05.569 { 00:12:05.569 "name": "BaseBdev2", 00:12:05.569 "aliases": [ 00:12:05.569 "f9fead22-59fa-4e3e-bb86-3f6f5ca3ed01" 00:12:05.569 ], 00:12:05.569 "product_name": "Malloc disk", 00:12:05.569 "block_size": 512, 00:12:05.569 "num_blocks": 65536, 00:12:05.569 "uuid": "f9fead22-59fa-4e3e-bb86-3f6f5ca3ed01", 00:12:05.569 "assigned_rate_limits": { 00:12:05.569 "rw_ios_per_sec": 0, 00:12:05.569 "rw_mbytes_per_sec": 0, 00:12:05.569 "r_mbytes_per_sec": 0, 00:12:05.569 "w_mbytes_per_sec": 0 00:12:05.569 }, 00:12:05.569 "claimed": true, 00:12:05.569 "claim_type": "exclusive_write", 00:12:05.569 "zoned": false, 00:12:05.569 "supported_io_types": { 00:12:05.569 "read": true, 00:12:05.569 "write": true, 00:12:05.569 "unmap": true, 00:12:05.569 "write_zeroes": true, 00:12:05.569 "flush": true, 00:12:05.569 "reset": true, 00:12:05.569 "compare": false, 00:12:05.569 "compare_and_write": false, 00:12:05.569 "abort": true, 00:12:05.569 "nvme_admin": false, 00:12:05.569 "nvme_io": false 00:12:05.569 }, 00:12:05.569 "memory_domains": [ 00:12:05.569 { 00:12:05.569 "dma_device_id": "system", 00:12:05.569 "dma_device_type": 1 00:12:05.569 }, 00:12:05.569 { 00:12:05.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.569 "dma_device_type": 2 00:12:05.569 } 00:12:05.569 ], 00:12:05.569 "driver_specific": {} 00:12:05.569 } 00:12:05.569 ] 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:05.569 
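The verify_raid_bdev_state checks that run throughout this trace reduce to one RPC dump plus a jq comparison. A minimal manual sketch of the same idea, reusing the rpc.py path, socket and jq filter visible in the trace above (simplified; not the exact test helper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Dump all raid bdevs, pick Existed_Raid, and compare the state the test expects next.
state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
[ "$state" = online ] || echo "unexpected raid state: $state"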
23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.569 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.827 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:05.827 "name": "Existed_Raid", 00:12:05.827 "uuid": "65423fda-82cb-4244-bac8-e7f3e3b639c0", 00:12:05.827 "strip_size_kb": 64, 00:12:05.827 "state": "online", 00:12:05.827 "raid_level": "raid0", 00:12:05.827 "superblock": false, 00:12:05.827 "num_base_bdevs": 2, 00:12:05.827 "num_base_bdevs_discovered": 2, 00:12:05.827 "num_base_bdevs_operational": 2, 00:12:05.827 "base_bdevs_list": [ 00:12:05.827 { 00:12:05.827 "name": "BaseBdev1", 00:12:05.827 "uuid": "77c809c8-f005-47f9-b135-aa17a0feaf1a", 00:12:05.827 "is_configured": true, 00:12:05.827 "data_offset": 0, 00:12:05.827 "data_size": 65536 00:12:05.827 }, 00:12:05.827 { 00:12:05.827 "name": "BaseBdev2", 00:12:05.827 "uuid": "f9fead22-59fa-4e3e-bb86-3f6f5ca3ed01", 00:12:05.827 "is_configured": true, 00:12:05.827 "data_offset": 0, 00:12:05.827 "data_size": 65536 00:12:05.827 } 00:12:05.827 ] 00:12:05.827 }' 00:12:05.827 23:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:05.827 23:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.394 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:12:06.394 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:06.394 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:06.394 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:06.394 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:06.394 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:06.394 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:06.394 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:06.394 [2024-05-14 23:27:29.652085] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.394 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:06.394 "name": "Existed_Raid", 00:12:06.394 "aliases": [ 00:12:06.394 "65423fda-82cb-4244-bac8-e7f3e3b639c0" 00:12:06.394 ], 00:12:06.394 "product_name": "Raid Volume", 00:12:06.394 "block_size": 512, 00:12:06.394 "num_blocks": 131072, 00:12:06.394 "uuid": "65423fda-82cb-4244-bac8-e7f3e3b639c0", 00:12:06.394 "assigned_rate_limits": { 00:12:06.394 "rw_ios_per_sec": 0, 00:12:06.394 "rw_mbytes_per_sec": 0, 00:12:06.394 "r_mbytes_per_sec": 0, 00:12:06.394 "w_mbytes_per_sec": 0 00:12:06.394 }, 00:12:06.394 "claimed": false, 00:12:06.394 "zoned": false, 00:12:06.394 "supported_io_types": { 00:12:06.394 "read": true, 00:12:06.394 "write": true, 00:12:06.394 
"unmap": true, 00:12:06.394 "write_zeroes": true, 00:12:06.394 "flush": true, 00:12:06.394 "reset": true, 00:12:06.394 "compare": false, 00:12:06.394 "compare_and_write": false, 00:12:06.394 "abort": false, 00:12:06.394 "nvme_admin": false, 00:12:06.394 "nvme_io": false 00:12:06.394 }, 00:12:06.394 "memory_domains": [ 00:12:06.394 { 00:12:06.394 "dma_device_id": "system", 00:12:06.394 "dma_device_type": 1 00:12:06.394 }, 00:12:06.394 { 00:12:06.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.394 "dma_device_type": 2 00:12:06.394 }, 00:12:06.394 { 00:12:06.394 "dma_device_id": "system", 00:12:06.394 "dma_device_type": 1 00:12:06.394 }, 00:12:06.394 { 00:12:06.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.394 "dma_device_type": 2 00:12:06.394 } 00:12:06.394 ], 00:12:06.394 "driver_specific": { 00:12:06.394 "raid": { 00:12:06.394 "uuid": "65423fda-82cb-4244-bac8-e7f3e3b639c0", 00:12:06.394 "strip_size_kb": 64, 00:12:06.394 "state": "online", 00:12:06.394 "raid_level": "raid0", 00:12:06.394 "superblock": false, 00:12:06.394 "num_base_bdevs": 2, 00:12:06.394 "num_base_bdevs_discovered": 2, 00:12:06.394 "num_base_bdevs_operational": 2, 00:12:06.394 "base_bdevs_list": [ 00:12:06.394 { 00:12:06.394 "name": "BaseBdev1", 00:12:06.394 "uuid": "77c809c8-f005-47f9-b135-aa17a0feaf1a", 00:12:06.394 "is_configured": true, 00:12:06.394 "data_offset": 0, 00:12:06.394 "data_size": 65536 00:12:06.394 }, 00:12:06.394 { 00:12:06.394 "name": "BaseBdev2", 00:12:06.394 "uuid": "f9fead22-59fa-4e3e-bb86-3f6f5ca3ed01", 00:12:06.394 "is_configured": true, 00:12:06.394 "data_offset": 0, 00:12:06.394 "data_size": 65536 00:12:06.394 } 00:12:06.394 ] 00:12:06.394 } 00:12:06.394 } 00:12:06.394 }' 00:12:06.394 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.652 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:12:06.652 BaseBdev2' 00:12:06.652 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:06.652 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:06.652 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:06.652 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:06.652 "name": "BaseBdev1", 00:12:06.652 "aliases": [ 00:12:06.652 "77c809c8-f005-47f9-b135-aa17a0feaf1a" 00:12:06.652 ], 00:12:06.652 "product_name": "Malloc disk", 00:12:06.652 "block_size": 512, 00:12:06.652 "num_blocks": 65536, 00:12:06.652 "uuid": "77c809c8-f005-47f9-b135-aa17a0feaf1a", 00:12:06.652 "assigned_rate_limits": { 00:12:06.652 "rw_ios_per_sec": 0, 00:12:06.652 "rw_mbytes_per_sec": 0, 00:12:06.652 "r_mbytes_per_sec": 0, 00:12:06.652 "w_mbytes_per_sec": 0 00:12:06.652 }, 00:12:06.652 "claimed": true, 00:12:06.652 "claim_type": "exclusive_write", 00:12:06.652 "zoned": false, 00:12:06.652 "supported_io_types": { 00:12:06.652 "read": true, 00:12:06.652 "write": true, 00:12:06.652 "unmap": true, 00:12:06.652 "write_zeroes": true, 00:12:06.652 "flush": true, 00:12:06.652 "reset": true, 00:12:06.652 "compare": false, 00:12:06.652 "compare_and_write": false, 00:12:06.652 "abort": true, 00:12:06.652 "nvme_admin": false, 00:12:06.652 "nvme_io": false 00:12:06.652 }, 00:12:06.652 "memory_domains": [ 
00:12:06.652 { 00:12:06.652 "dma_device_id": "system", 00:12:06.652 "dma_device_type": 1 00:12:06.652 }, 00:12:06.652 { 00:12:06.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.652 "dma_device_type": 2 00:12:06.652 } 00:12:06.652 ], 00:12:06.652 "driver_specific": {} 00:12:06.652 }' 00:12:06.652 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:06.910 23:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:06.910 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:06.910 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:06.910 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:06.910 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.910 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:07.167 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:07.167 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.167 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:07.167 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:07.167 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:07.167 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:07.167 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:07.167 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:07.425 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:07.425 "name": "BaseBdev2", 00:12:07.425 "aliases": [ 00:12:07.425 "f9fead22-59fa-4e3e-bb86-3f6f5ca3ed01" 00:12:07.425 ], 00:12:07.425 "product_name": "Malloc disk", 00:12:07.425 "block_size": 512, 00:12:07.425 "num_blocks": 65536, 00:12:07.425 "uuid": "f9fead22-59fa-4e3e-bb86-3f6f5ca3ed01", 00:12:07.425 "assigned_rate_limits": { 00:12:07.425 "rw_ios_per_sec": 0, 00:12:07.425 "rw_mbytes_per_sec": 0, 00:12:07.425 "r_mbytes_per_sec": 0, 00:12:07.425 "w_mbytes_per_sec": 0 00:12:07.425 }, 00:12:07.425 "claimed": true, 00:12:07.425 "claim_type": "exclusive_write", 00:12:07.425 "zoned": false, 00:12:07.425 "supported_io_types": { 00:12:07.425 "read": true, 00:12:07.425 "write": true, 00:12:07.425 "unmap": true, 00:12:07.425 "write_zeroes": true, 00:12:07.425 "flush": true, 00:12:07.425 "reset": true, 00:12:07.425 "compare": false, 00:12:07.425 "compare_and_write": false, 00:12:07.425 "abort": true, 00:12:07.425 "nvme_admin": false, 00:12:07.425 "nvme_io": false 00:12:07.425 }, 00:12:07.425 "memory_domains": [ 00:12:07.425 { 00:12:07.425 "dma_device_id": "system", 00:12:07.425 "dma_device_type": 1 00:12:07.425 }, 00:12:07.425 { 00:12:07.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.425 "dma_device_type": 2 00:12:07.425 } 00:12:07.425 ], 00:12:07.426 "driver_specific": {} 00:12:07.426 }' 00:12:07.426 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:07.426 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 
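The run of jq probes that follows pulls individual fields out of each base bdev's bdev_get_bdevs record and compares them against fixed expectations. A condensed sketch of the same check (field names taken from the jq calls in the trace, expected values from the surrounding [[ ... ]] tests; not the exact helper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for name in BaseBdev1 BaseBdev2; do
  # bdev_get_bdevs returns a one-element array; .[0] unwraps it.
  info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[0]')
  # The test expects 512-byte blocks and null metadata/DIF settings on the malloc base bdevs.
  echo "$info" | jq '{block_size, md_size, md_interleave, dif_type}'
done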
00:12:07.426 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:07.426 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:07.683 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:07.683 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:07.683 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:07.683 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:07.683 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.683 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:07.941 23:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:07.941 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:07.941 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:08.199 [2024-05-14 23:27:31.256216] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.199 [2024-05-14 23:27:31.256256] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.199 [2024-05-14 23:27:31.256302] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.199 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:12:08.199 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:12:08.199 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:08.199 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:12:08.199 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:12:08.199 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:08.199 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:08.200 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:08.200 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:08.200 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:08.200 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:08.200 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:08.200 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:08.200 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:08.200 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:08.200 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.200 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:08.457 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:08.457 "name": "Existed_Raid", 00:12:08.457 "uuid": "65423fda-82cb-4244-bac8-e7f3e3b639c0", 00:12:08.457 "strip_size_kb": 64, 00:12:08.457 "state": "offline", 00:12:08.457 "raid_level": "raid0", 00:12:08.457 "superblock": false, 00:12:08.457 "num_base_bdevs": 2, 00:12:08.458 "num_base_bdevs_discovered": 1, 00:12:08.458 "num_base_bdevs_operational": 1, 00:12:08.458 "base_bdevs_list": [ 00:12:08.458 { 00:12:08.458 "name": null, 00:12:08.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.458 "is_configured": false, 00:12:08.458 "data_offset": 0, 00:12:08.458 "data_size": 65536 00:12:08.458 }, 00:12:08.458 { 00:12:08.458 "name": "BaseBdev2", 00:12:08.458 "uuid": "f9fead22-59fa-4e3e-bb86-3f6f5ca3ed01", 00:12:08.458 "is_configured": true, 00:12:08.458 "data_offset": 0, 00:12:08.458 "data_size": 65536 00:12:08.458 } 00:12:08.458 ] 00:12:08.458 }' 00:12:08.458 23:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:08.458 23:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.024 23:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:09.024 23:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:09.024 23:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.024 23:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:09.281 23:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:09.282 23:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.282 23:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:09.539 [2024-05-14 23:27:32.766190] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:09.540 [2024-05-14 23:27:32.766260] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:12:09.798 23:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:09.798 23:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:09.798 23:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.798 23:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 52890 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 52890 ']' 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 52890 00:12:10.056 23:27:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 52890 00:12:10.056 killing process with pid 52890 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52890' 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 52890 00:12:10.056 23:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 52890 00:12:10.056 [2024-05-14 23:27:33.115955] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.056 [2024-05-14 23:27:33.116086] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.429 ************************************ 00:12:11.429 END TEST raid_state_function_test 00:12:11.429 ************************************ 00:12:11.429 23:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:12:11.429 00:12:11.429 real 0m11.431s 00:12:11.429 user 0m20.186s 00:12:11.429 sys 0m1.205s 00:12:11.429 23:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:11.429 23:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.429 23:27:34 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:12:11.429 23:27:34 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:11.429 23:27:34 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:11.429 23:27:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.429 ************************************ 00:12:11.429 START TEST raid_state_function_test_sb 00:12:11.429 ************************************ 00:12:11.429 23:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 true 00:12:11.429 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:12:11.429 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:12:11.429 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:12:11.429 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:12:11.429 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:11.430 
23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:11.430 Process raid pid: 53267 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=53267 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 53267' 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 53267 /var/tmp/spdk-raid.sock 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 53267 ']' 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:11.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:11.430 23:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.430 [2024-05-14 23:27:34.530316] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
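Functionally, raid_state_function_test_sb repeats the previous test with the superblock flag wired in (superblock_create_arg=-s above). The create call it issues, visible just below, is shown here as a standalone sketch with the same bdev names and socket path the test uses:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# -s makes the raid module write a superblock to each base bdev, reserving space for metadata;
# that is why the later dumps report data_offset 2048 / data_size 63488 instead of the
# 0 / 65536 seen in the non-superblock run.
"$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid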
00:12:11.430 [2024-05-14 23:27:34.530655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.430 [2024-05-14 23:27:34.703988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.688 [2024-05-14 23:27:34.955364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.946 [2024-05-14 23:27:35.143905] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.204 23:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:12.204 23:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:12:12.204 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:12.463 [2024-05-14 23:27:35.569593] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.463 [2024-05-14 23:27:35.569689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.463 [2024-05-14 23:27:35.569720] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.463 [2024-05-14 23:27:35.569761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.463 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.721 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:12.721 "name": "Existed_Raid", 00:12:12.721 "uuid": "23a7d624-f08e-46ef-8bb4-06c73025c4f1", 00:12:12.721 "strip_size_kb": 64, 00:12:12.721 "state": "configuring", 00:12:12.721 "raid_level": "raid0", 00:12:12.721 "superblock": true, 00:12:12.721 "num_base_bdevs": 2, 00:12:12.721 "num_base_bdevs_discovered": 0, 00:12:12.721 "num_base_bdevs_operational": 2, 
00:12:12.721 "base_bdevs_list": [ 00:12:12.721 { 00:12:12.721 "name": "BaseBdev1", 00:12:12.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.721 "is_configured": false, 00:12:12.721 "data_offset": 0, 00:12:12.721 "data_size": 0 00:12:12.721 }, 00:12:12.721 { 00:12:12.721 "name": "BaseBdev2", 00:12:12.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.721 "is_configured": false, 00:12:12.721 "data_offset": 0, 00:12:12.721 "data_size": 0 00:12:12.721 } 00:12:12.721 ] 00:12:12.721 }' 00:12:12.721 23:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:12.721 23:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.287 23:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:13.545 [2024-05-14 23:27:36.597556] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.545 [2024-05-14 23:27:36.597601] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:12:13.545 23:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:13.545 [2024-05-14 23:27:36.785623] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.545 [2024-05-14 23:27:36.785728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.545 [2024-05-14 23:27:36.785760] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.545 [2024-05-14 23:27:36.785785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.545 23:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.803 [2024-05-14 23:27:37.019504] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.803 BaseBdev1 00:12:13.803 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:12:13.803 23:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:13.803 23:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:13.803 23:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:13.803 23:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:13.803 23:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:13.803 23:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:14.060 23:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:14.318 [ 00:12:14.318 { 00:12:14.318 "name": "BaseBdev1", 00:12:14.318 "aliases": [ 00:12:14.318 "5f41b482-50ca-41c5-b555-5dc701454130" 00:12:14.318 ], 00:12:14.318 
"product_name": "Malloc disk", 00:12:14.318 "block_size": 512, 00:12:14.318 "num_blocks": 65536, 00:12:14.318 "uuid": "5f41b482-50ca-41c5-b555-5dc701454130", 00:12:14.318 "assigned_rate_limits": { 00:12:14.318 "rw_ios_per_sec": 0, 00:12:14.318 "rw_mbytes_per_sec": 0, 00:12:14.318 "r_mbytes_per_sec": 0, 00:12:14.318 "w_mbytes_per_sec": 0 00:12:14.318 }, 00:12:14.318 "claimed": true, 00:12:14.318 "claim_type": "exclusive_write", 00:12:14.318 "zoned": false, 00:12:14.318 "supported_io_types": { 00:12:14.318 "read": true, 00:12:14.318 "write": true, 00:12:14.318 "unmap": true, 00:12:14.318 "write_zeroes": true, 00:12:14.318 "flush": true, 00:12:14.318 "reset": true, 00:12:14.318 "compare": false, 00:12:14.318 "compare_and_write": false, 00:12:14.318 "abort": true, 00:12:14.318 "nvme_admin": false, 00:12:14.318 "nvme_io": false 00:12:14.318 }, 00:12:14.318 "memory_domains": [ 00:12:14.318 { 00:12:14.318 "dma_device_id": "system", 00:12:14.318 "dma_device_type": 1 00:12:14.318 }, 00:12:14.318 { 00:12:14.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.318 "dma_device_type": 2 00:12:14.318 } 00:12:14.318 ], 00:12:14.318 "driver_specific": {} 00:12:14.318 } 00:12:14.318 ] 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.318 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.576 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:14.576 "name": "Existed_Raid", 00:12:14.576 "uuid": "e889cbe6-2b86-438b-8306-049c6d6a89cf", 00:12:14.576 "strip_size_kb": 64, 00:12:14.576 "state": "configuring", 00:12:14.576 "raid_level": "raid0", 00:12:14.576 "superblock": true, 00:12:14.576 "num_base_bdevs": 2, 00:12:14.576 "num_base_bdevs_discovered": 1, 00:12:14.576 "num_base_bdevs_operational": 2, 00:12:14.576 "base_bdevs_list": [ 00:12:14.576 { 00:12:14.576 "name": "BaseBdev1", 00:12:14.576 "uuid": "5f41b482-50ca-41c5-b555-5dc701454130", 00:12:14.576 "is_configured": true, 00:12:14.576 "data_offset": 2048, 00:12:14.576 "data_size": 63488 00:12:14.576 }, 00:12:14.576 { 
00:12:14.576 "name": "BaseBdev2", 00:12:14.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.576 "is_configured": false, 00:12:14.576 "data_offset": 0, 00:12:14.576 "data_size": 0 00:12:14.576 } 00:12:14.576 ] 00:12:14.576 }' 00:12:14.576 23:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:14.576 23:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.143 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:15.401 [2024-05-14 23:27:38.459766] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:15.401 [2024-05-14 23:27:38.459820] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:12:15.401 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:15.660 [2024-05-14 23:27:38.691835] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.660 [2024-05-14 23:27:38.693511] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.660 [2024-05-14 23:27:38.693566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:15.660 "name": "Existed_Raid", 00:12:15.660 "uuid": "8bb89a8e-ee12-455a-a64c-2d4e4d711972", 00:12:15.660 "strip_size_kb": 64, 00:12:15.660 "state": "configuring", 00:12:15.660 
"raid_level": "raid0", 00:12:15.660 "superblock": true, 00:12:15.660 "num_base_bdevs": 2, 00:12:15.660 "num_base_bdevs_discovered": 1, 00:12:15.660 "num_base_bdevs_operational": 2, 00:12:15.660 "base_bdevs_list": [ 00:12:15.660 { 00:12:15.660 "name": "BaseBdev1", 00:12:15.660 "uuid": "5f41b482-50ca-41c5-b555-5dc701454130", 00:12:15.660 "is_configured": true, 00:12:15.660 "data_offset": 2048, 00:12:15.660 "data_size": 63488 00:12:15.660 }, 00:12:15.660 { 00:12:15.660 "name": "BaseBdev2", 00:12:15.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.660 "is_configured": false, 00:12:15.660 "data_offset": 0, 00:12:15.660 "data_size": 0 00:12:15.660 } 00:12:15.660 ] 00:12:15.660 }' 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:15.660 23:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.594 23:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:16.594 [2024-05-14 23:27:39.791795] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.594 [2024-05-14 23:27:39.791970] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:12:16.594 [2024-05-14 23:27:39.791985] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:16.594 [2024-05-14 23:27:39.792089] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:12:16.595 BaseBdev2 00:12:16.595 [2024-05-14 23:27:39.792636] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:12:16.595 [2024-05-14 23:27:39.792657] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:12:16.595 [2024-05-14 23:27:39.792793] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.595 23:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:12:16.595 23:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:16.595 23:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:16.595 23:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:16.595 23:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:16.595 23:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:16.595 23:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:16.853 23:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:17.113 [ 00:12:17.113 { 00:12:17.113 "name": "BaseBdev2", 00:12:17.113 "aliases": [ 00:12:17.113 "2953c70d-985c-4bfb-a745-c6696eac45a7" 00:12:17.113 ], 00:12:17.113 "product_name": "Malloc disk", 00:12:17.113 "block_size": 512, 00:12:17.113 "num_blocks": 65536, 00:12:17.113 "uuid": "2953c70d-985c-4bfb-a745-c6696eac45a7", 00:12:17.113 "assigned_rate_limits": { 00:12:17.113 "rw_ios_per_sec": 0, 00:12:17.113 "rw_mbytes_per_sec": 0, 
00:12:17.113 "r_mbytes_per_sec": 0, 00:12:17.113 "w_mbytes_per_sec": 0 00:12:17.113 }, 00:12:17.113 "claimed": true, 00:12:17.113 "claim_type": "exclusive_write", 00:12:17.113 "zoned": false, 00:12:17.113 "supported_io_types": { 00:12:17.113 "read": true, 00:12:17.113 "write": true, 00:12:17.113 "unmap": true, 00:12:17.113 "write_zeroes": true, 00:12:17.113 "flush": true, 00:12:17.113 "reset": true, 00:12:17.113 "compare": false, 00:12:17.113 "compare_and_write": false, 00:12:17.113 "abort": true, 00:12:17.113 "nvme_admin": false, 00:12:17.113 "nvme_io": false 00:12:17.113 }, 00:12:17.113 "memory_domains": [ 00:12:17.113 { 00:12:17.113 "dma_device_id": "system", 00:12:17.113 "dma_device_type": 1 00:12:17.113 }, 00:12:17.113 { 00:12:17.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.113 "dma_device_type": 2 00:12:17.113 } 00:12:17.113 ], 00:12:17.113 "driver_specific": {} 00:12:17.113 } 00:12:17.113 ] 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.113 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.404 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:17.404 "name": "Existed_Raid", 00:12:17.404 "uuid": "8bb89a8e-ee12-455a-a64c-2d4e4d711972", 00:12:17.404 "strip_size_kb": 64, 00:12:17.404 "state": "online", 00:12:17.404 "raid_level": "raid0", 00:12:17.404 "superblock": true, 00:12:17.404 "num_base_bdevs": 2, 00:12:17.404 "num_base_bdevs_discovered": 2, 00:12:17.404 "num_base_bdevs_operational": 2, 00:12:17.404 "base_bdevs_list": [ 00:12:17.404 { 00:12:17.404 "name": "BaseBdev1", 00:12:17.404 "uuid": "5f41b482-50ca-41c5-b555-5dc701454130", 00:12:17.404 "is_configured": true, 00:12:17.404 "data_offset": 2048, 00:12:17.404 "data_size": 63488 00:12:17.404 }, 00:12:17.404 { 00:12:17.404 "name": "BaseBdev2", 00:12:17.404 "uuid": 
"2953c70d-985c-4bfb-a745-c6696eac45a7", 00:12:17.404 "is_configured": true, 00:12:17.404 "data_offset": 2048, 00:12:17.404 "data_size": 63488 00:12:17.404 } 00:12:17.404 ] 00:12:17.404 }' 00:12:17.404 23:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:17.404 23:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.970 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:12:17.970 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:17.970 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:17.970 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:17.970 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:17.970 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:12:17.970 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:17.970 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:17.970 [2024-05-14 23:27:41.200233] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.970 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:17.970 "name": "Existed_Raid", 00:12:17.970 "aliases": [ 00:12:17.970 "8bb89a8e-ee12-455a-a64c-2d4e4d711972" 00:12:17.970 ], 00:12:17.970 "product_name": "Raid Volume", 00:12:17.970 "block_size": 512, 00:12:17.970 "num_blocks": 126976, 00:12:17.970 "uuid": "8bb89a8e-ee12-455a-a64c-2d4e4d711972", 00:12:17.970 "assigned_rate_limits": { 00:12:17.970 "rw_ios_per_sec": 0, 00:12:17.970 "rw_mbytes_per_sec": 0, 00:12:17.970 "r_mbytes_per_sec": 0, 00:12:17.970 "w_mbytes_per_sec": 0 00:12:17.970 }, 00:12:17.970 "claimed": false, 00:12:17.970 "zoned": false, 00:12:17.970 "supported_io_types": { 00:12:17.970 "read": true, 00:12:17.970 "write": true, 00:12:17.970 "unmap": true, 00:12:17.970 "write_zeroes": true, 00:12:17.970 "flush": true, 00:12:17.970 "reset": true, 00:12:17.970 "compare": false, 00:12:17.970 "compare_and_write": false, 00:12:17.970 "abort": false, 00:12:17.970 "nvme_admin": false, 00:12:17.970 "nvme_io": false 00:12:17.970 }, 00:12:17.970 "memory_domains": [ 00:12:17.970 { 00:12:17.970 "dma_device_id": "system", 00:12:17.970 "dma_device_type": 1 00:12:17.970 }, 00:12:17.970 { 00:12:17.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.970 "dma_device_type": 2 00:12:17.970 }, 00:12:17.970 { 00:12:17.970 "dma_device_id": "system", 00:12:17.970 "dma_device_type": 1 00:12:17.970 }, 00:12:17.970 { 00:12:17.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.970 "dma_device_type": 2 00:12:17.970 } 00:12:17.970 ], 00:12:17.970 "driver_specific": { 00:12:17.970 "raid": { 00:12:17.970 "uuid": "8bb89a8e-ee12-455a-a64c-2d4e4d711972", 00:12:17.970 "strip_size_kb": 64, 00:12:17.970 "state": "online", 00:12:17.970 "raid_level": "raid0", 00:12:17.970 "superblock": true, 00:12:17.970 "num_base_bdevs": 2, 00:12:17.970 "num_base_bdevs_discovered": 2, 00:12:17.970 "num_base_bdevs_operational": 2, 00:12:17.970 "base_bdevs_list": [ 00:12:17.970 { 00:12:17.970 "name": "BaseBdev1", 00:12:17.970 "uuid": 
"5f41b482-50ca-41c5-b555-5dc701454130", 00:12:17.970 "is_configured": true, 00:12:17.970 "data_offset": 2048, 00:12:17.970 "data_size": 63488 00:12:17.970 }, 00:12:17.970 { 00:12:17.970 "name": "BaseBdev2", 00:12:17.970 "uuid": "2953c70d-985c-4bfb-a745-c6696eac45a7", 00:12:17.970 "is_configured": true, 00:12:17.970 "data_offset": 2048, 00:12:17.970 "data_size": 63488 00:12:17.970 } 00:12:17.970 ] 00:12:17.970 } 00:12:17.970 } 00:12:17.970 }' 00:12:17.970 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.229 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:12:18.229 BaseBdev2' 00:12:18.229 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:18.229 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:18.229 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:18.229 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:18.229 "name": "BaseBdev1", 00:12:18.229 "aliases": [ 00:12:18.229 "5f41b482-50ca-41c5-b555-5dc701454130" 00:12:18.229 ], 00:12:18.229 "product_name": "Malloc disk", 00:12:18.229 "block_size": 512, 00:12:18.229 "num_blocks": 65536, 00:12:18.229 "uuid": "5f41b482-50ca-41c5-b555-5dc701454130", 00:12:18.229 "assigned_rate_limits": { 00:12:18.229 "rw_ios_per_sec": 0, 00:12:18.229 "rw_mbytes_per_sec": 0, 00:12:18.229 "r_mbytes_per_sec": 0, 00:12:18.229 "w_mbytes_per_sec": 0 00:12:18.229 }, 00:12:18.229 "claimed": true, 00:12:18.229 "claim_type": "exclusive_write", 00:12:18.229 "zoned": false, 00:12:18.229 "supported_io_types": { 00:12:18.229 "read": true, 00:12:18.229 "write": true, 00:12:18.229 "unmap": true, 00:12:18.229 "write_zeroes": true, 00:12:18.229 "flush": true, 00:12:18.229 "reset": true, 00:12:18.229 "compare": false, 00:12:18.229 "compare_and_write": false, 00:12:18.229 "abort": true, 00:12:18.229 "nvme_admin": false, 00:12:18.229 "nvme_io": false 00:12:18.229 }, 00:12:18.229 "memory_domains": [ 00:12:18.229 { 00:12:18.229 "dma_device_id": "system", 00:12:18.229 "dma_device_type": 1 00:12:18.229 }, 00:12:18.229 { 00:12:18.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.229 "dma_device_type": 2 00:12:18.229 } 00:12:18.229 ], 00:12:18.229 "driver_specific": {} 00:12:18.229 }' 00:12:18.229 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:18.487 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:18.487 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:18.487 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:18.487 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:18.487 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:18.487 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:18.487 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:18.745 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:12:18.745 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:18.745 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:18.745 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:18.745 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:18.746 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:18.746 23:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:19.004 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:19.004 "name": "BaseBdev2", 00:12:19.004 "aliases": [ 00:12:19.004 "2953c70d-985c-4bfb-a745-c6696eac45a7" 00:12:19.004 ], 00:12:19.004 "product_name": "Malloc disk", 00:12:19.004 "block_size": 512, 00:12:19.004 "num_blocks": 65536, 00:12:19.004 "uuid": "2953c70d-985c-4bfb-a745-c6696eac45a7", 00:12:19.004 "assigned_rate_limits": { 00:12:19.004 "rw_ios_per_sec": 0, 00:12:19.004 "rw_mbytes_per_sec": 0, 00:12:19.004 "r_mbytes_per_sec": 0, 00:12:19.004 "w_mbytes_per_sec": 0 00:12:19.004 }, 00:12:19.004 "claimed": true, 00:12:19.004 "claim_type": "exclusive_write", 00:12:19.004 "zoned": false, 00:12:19.004 "supported_io_types": { 00:12:19.004 "read": true, 00:12:19.004 "write": true, 00:12:19.004 "unmap": true, 00:12:19.004 "write_zeroes": true, 00:12:19.004 "flush": true, 00:12:19.004 "reset": true, 00:12:19.004 "compare": false, 00:12:19.004 "compare_and_write": false, 00:12:19.004 "abort": true, 00:12:19.004 "nvme_admin": false, 00:12:19.004 "nvme_io": false 00:12:19.004 }, 00:12:19.004 "memory_domains": [ 00:12:19.004 { 00:12:19.004 "dma_device_id": "system", 00:12:19.004 "dma_device_type": 1 00:12:19.004 }, 00:12:19.004 { 00:12:19.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.004 "dma_device_type": 2 00:12:19.004 } 00:12:19.004 ], 00:12:19.004 "driver_specific": {} 00:12:19.004 }' 00:12:19.004 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:19.004 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:19.004 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:19.004 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:19.004 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:19.262 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:19.262 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:19.262 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:19.262 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:19.262 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:19.262 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:19.262 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:19.262 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:19.519 [2024-05-14 23:27:42.720431] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:19.519 [2024-05-14 23:27:42.720460] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.519 [2024-05-14 23:27:42.720506] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.778 23:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.778 23:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:19.778 "name": "Existed_Raid", 00:12:19.778 "uuid": "8bb89a8e-ee12-455a-a64c-2d4e4d711972", 00:12:19.778 "strip_size_kb": 64, 00:12:19.778 "state": "offline", 00:12:19.778 "raid_level": "raid0", 00:12:19.778 "superblock": true, 00:12:19.778 "num_base_bdevs": 2, 00:12:19.778 "num_base_bdevs_discovered": 1, 00:12:19.778 "num_base_bdevs_operational": 1, 00:12:19.778 "base_bdevs_list": [ 00:12:19.778 { 00:12:19.778 "name": null, 00:12:19.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.778 "is_configured": false, 00:12:19.778 "data_offset": 2048, 00:12:19.778 "data_size": 63488 00:12:19.778 }, 00:12:19.778 { 00:12:19.778 "name": "BaseBdev2", 00:12:19.778 "uuid": "2953c70d-985c-4bfb-a745-c6696eac45a7", 00:12:19.778 "is_configured": true, 00:12:19.778 "data_offset": 2048, 00:12:19.778 "data_size": 63488 00:12:19.778 } 00:12:19.778 ] 00:12:19.778 }' 00:12:19.778 23:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:19.778 23:27:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.713 23:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:20.713 23:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.713 23:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.713 23:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:20.713 23:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:20.713 23:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.713 23:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:20.972 [2024-05-14 23:27:44.090811] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:20.972 [2024-05-14 23:27:44.090871] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:12:20.972 23:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.972 23:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.972 23:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.972 23:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 53267 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 53267 ']' 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 53267 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 53267 00:12:21.230 killing process with pid 53267 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53267' 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 53267 00:12:21.230 23:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 53267 00:12:21.230 [2024-05-14 23:27:44.392969] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:12:21.230 [2024-05-14 23:27:44.393098] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:22.601 23:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:12:22.601 00:12:22.601 real 0m11.207s 00:12:22.601 user 0m19.764s 00:12:22.601 sys 0m1.210s 00:12:22.601 23:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:22.601 ************************************ 00:12:22.601 END TEST raid_state_function_test_sb 00:12:22.601 ************************************ 00:12:22.601 23:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.601 23:27:45 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:12:22.601 23:27:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:22.601 23:27:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:22.601 23:27:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:22.601 ************************************ 00:12:22.601 START TEST raid_superblock_test 00:12:22.601 ************************************ 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 2 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=53644 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 53644 /var/tmp/spdk-raid.sock 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 53644 ']' 00:12:22.601 23:27:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:22.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:22.601 23:27:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.601 [2024-05-14 23:27:45.772732] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:12:22.601 [2024-05-14 23:27:45.772902] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53644 ] 00:12:22.859 [2024-05-14 23:27:45.930294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.859 [2024-05-14 23:27:46.126438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.116 [2024-05-14 23:27:46.323814] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.374 23:27:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:23.374 23:27:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:12:23.374 23:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:23.374 23:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:23.374 23:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:23.374 23:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:23.374 23:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:23.374 23:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:23.374 23:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:23.374 23:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:23.374 23:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:23.632 malloc1 00:12:23.632 23:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:23.890 [2024-05-14 23:27:47.028536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:23.890 [2024-05-14 23:27:47.028633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.890 [2024-05-14 23:27:47.028685] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:12:23.890 [2024-05-14 23:27:47.028723] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.890 [2024-05-14 23:27:47.030348] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.890 [2024-05-14 23:27:47.030395] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:23.890 pt1 00:12:23.890 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:23.890 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:23.890 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:23.890 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:23.890 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:23.890 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:23.890 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:23.890 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:23.890 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:24.148 malloc2 00:12:24.148 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:24.407 [2024-05-14 23:27:47.476779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:24.407 [2024-05-14 23:27:47.476885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.407 [2024-05-14 23:27:47.476932] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:12:24.407 [2024-05-14 23:27:47.476971] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.407 [2024-05-14 23:27:47.478886] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.407 [2024-05-14 23:27:47.478976] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:24.407 pt2 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:12:24.407 [2024-05-14 23:27:47.672954] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:24.407 [2024-05-14 23:27:47.674801] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:24.407 [2024-05-14 23:27:47.675011] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:12:24.407 [2024-05-14 23:27:47.675027] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:24.407 [2024-05-14 23:27:47.675145] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:12:24.407 [2024-05-14 23:27:47.675505] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:12:24.407 [2024-05-14 23:27:47.675532] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000011180 00:12:24.407 [2024-05-14 23:27:47.675651] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.407 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.665 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:24.665 "name": "raid_bdev1", 00:12:24.665 "uuid": "b05fbc99-e8f8-4d4e-89ec-f2199cae8824", 00:12:24.665 "strip_size_kb": 64, 00:12:24.665 "state": "online", 00:12:24.665 "raid_level": "raid0", 00:12:24.665 "superblock": true, 00:12:24.665 "num_base_bdevs": 2, 00:12:24.665 "num_base_bdevs_discovered": 2, 00:12:24.665 "num_base_bdevs_operational": 2, 00:12:24.665 "base_bdevs_list": [ 00:12:24.665 { 00:12:24.665 "name": "pt1", 00:12:24.665 "uuid": "076dea92-120c-5097-be0a-892b85e29448", 00:12:24.665 "is_configured": true, 00:12:24.665 "data_offset": 2048, 00:12:24.665 "data_size": 63488 00:12:24.665 }, 00:12:24.665 { 00:12:24.665 "name": "pt2", 00:12:24.665 "uuid": "9380302c-0a07-59f5-bfd0-e9a102a23cb5", 00:12:24.665 "is_configured": true, 00:12:24.665 "data_offset": 2048, 00:12:24.665 "data_size": 63488 00:12:24.665 } 00:12:24.665 ] 00:12:24.665 }' 00:12:24.665 23:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:24.665 23:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.231 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:25.231 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:12:25.231 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:25.231 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:25.231 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:25.231 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:25.231 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:12:25.231 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:25.489 [2024-05-14 23:27:48.721184] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.489 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:25.489 "name": "raid_bdev1", 00:12:25.489 "aliases": [ 00:12:25.489 "b05fbc99-e8f8-4d4e-89ec-f2199cae8824" 00:12:25.489 ], 00:12:25.489 "product_name": "Raid Volume", 00:12:25.489 "block_size": 512, 00:12:25.489 "num_blocks": 126976, 00:12:25.489 "uuid": "b05fbc99-e8f8-4d4e-89ec-f2199cae8824", 00:12:25.489 "assigned_rate_limits": { 00:12:25.489 "rw_ios_per_sec": 0, 00:12:25.489 "rw_mbytes_per_sec": 0, 00:12:25.489 "r_mbytes_per_sec": 0, 00:12:25.489 "w_mbytes_per_sec": 0 00:12:25.489 }, 00:12:25.489 "claimed": false, 00:12:25.489 "zoned": false, 00:12:25.489 "supported_io_types": { 00:12:25.489 "read": true, 00:12:25.489 "write": true, 00:12:25.489 "unmap": true, 00:12:25.489 "write_zeroes": true, 00:12:25.489 "flush": true, 00:12:25.489 "reset": true, 00:12:25.489 "compare": false, 00:12:25.489 "compare_and_write": false, 00:12:25.489 "abort": false, 00:12:25.489 "nvme_admin": false, 00:12:25.489 "nvme_io": false 00:12:25.489 }, 00:12:25.489 "memory_domains": [ 00:12:25.489 { 00:12:25.489 "dma_device_id": "system", 00:12:25.489 "dma_device_type": 1 00:12:25.489 }, 00:12:25.489 { 00:12:25.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.489 "dma_device_type": 2 00:12:25.489 }, 00:12:25.489 { 00:12:25.489 "dma_device_id": "system", 00:12:25.489 "dma_device_type": 1 00:12:25.489 }, 00:12:25.489 { 00:12:25.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.489 "dma_device_type": 2 00:12:25.489 } 00:12:25.489 ], 00:12:25.489 "driver_specific": { 00:12:25.489 "raid": { 00:12:25.489 "uuid": "b05fbc99-e8f8-4d4e-89ec-f2199cae8824", 00:12:25.489 "strip_size_kb": 64, 00:12:25.489 "state": "online", 00:12:25.489 "raid_level": "raid0", 00:12:25.489 "superblock": true, 00:12:25.489 "num_base_bdevs": 2, 00:12:25.489 "num_base_bdevs_discovered": 2, 00:12:25.489 "num_base_bdevs_operational": 2, 00:12:25.489 "base_bdevs_list": [ 00:12:25.489 { 00:12:25.489 "name": "pt1", 00:12:25.489 "uuid": "076dea92-120c-5097-be0a-892b85e29448", 00:12:25.489 "is_configured": true, 00:12:25.489 "data_offset": 2048, 00:12:25.489 "data_size": 63488 00:12:25.489 }, 00:12:25.489 { 00:12:25.489 "name": "pt2", 00:12:25.489 "uuid": "9380302c-0a07-59f5-bfd0-e9a102a23cb5", 00:12:25.489 "is_configured": true, 00:12:25.489 "data_offset": 2048, 00:12:25.489 "data_size": 63488 00:12:25.489 } 00:12:25.489 ] 00:12:25.489 } 00:12:25.489 } 00:12:25.489 }' 00:12:25.489 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.747 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:12:25.747 pt2' 00:12:25.747 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:25.747 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:25.747 23:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:26.004 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:26.004 "name": "pt1", 00:12:26.004 "aliases": [ 00:12:26.004 
"076dea92-120c-5097-be0a-892b85e29448" 00:12:26.004 ], 00:12:26.004 "product_name": "passthru", 00:12:26.004 "block_size": 512, 00:12:26.004 "num_blocks": 65536, 00:12:26.004 "uuid": "076dea92-120c-5097-be0a-892b85e29448", 00:12:26.004 "assigned_rate_limits": { 00:12:26.004 "rw_ios_per_sec": 0, 00:12:26.004 "rw_mbytes_per_sec": 0, 00:12:26.004 "r_mbytes_per_sec": 0, 00:12:26.004 "w_mbytes_per_sec": 0 00:12:26.004 }, 00:12:26.004 "claimed": true, 00:12:26.004 "claim_type": "exclusive_write", 00:12:26.004 "zoned": false, 00:12:26.004 "supported_io_types": { 00:12:26.004 "read": true, 00:12:26.004 "write": true, 00:12:26.004 "unmap": true, 00:12:26.004 "write_zeroes": true, 00:12:26.004 "flush": true, 00:12:26.004 "reset": true, 00:12:26.004 "compare": false, 00:12:26.004 "compare_and_write": false, 00:12:26.004 "abort": true, 00:12:26.004 "nvme_admin": false, 00:12:26.004 "nvme_io": false 00:12:26.004 }, 00:12:26.005 "memory_domains": [ 00:12:26.005 { 00:12:26.005 "dma_device_id": "system", 00:12:26.005 "dma_device_type": 1 00:12:26.005 }, 00:12:26.005 { 00:12:26.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.005 "dma_device_type": 2 00:12:26.005 } 00:12:26.005 ], 00:12:26.005 "driver_specific": { 00:12:26.005 "passthru": { 00:12:26.005 "name": "pt1", 00:12:26.005 "base_bdev_name": "malloc1" 00:12:26.005 } 00:12:26.005 } 00:12:26.005 }' 00:12:26.005 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:26.005 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:26.005 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:26.005 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:26.005 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:26.005 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:26.005 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:26.263 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:26.263 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:26.263 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:26.263 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:26.263 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:26.263 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:26.263 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:26.263 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:26.521 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:26.521 "name": "pt2", 00:12:26.521 "aliases": [ 00:12:26.521 "9380302c-0a07-59f5-bfd0-e9a102a23cb5" 00:12:26.521 ], 00:12:26.521 "product_name": "passthru", 00:12:26.521 "block_size": 512, 00:12:26.521 "num_blocks": 65536, 00:12:26.521 "uuid": "9380302c-0a07-59f5-bfd0-e9a102a23cb5", 00:12:26.521 "assigned_rate_limits": { 00:12:26.521 "rw_ios_per_sec": 0, 00:12:26.521 "rw_mbytes_per_sec": 0, 00:12:26.521 "r_mbytes_per_sec": 0, 00:12:26.521 "w_mbytes_per_sec": 0 00:12:26.521 }, 00:12:26.521 "claimed": true, 
00:12:26.521 "claim_type": "exclusive_write", 00:12:26.521 "zoned": false, 00:12:26.521 "supported_io_types": { 00:12:26.521 "read": true, 00:12:26.521 "write": true, 00:12:26.521 "unmap": true, 00:12:26.521 "write_zeroes": true, 00:12:26.521 "flush": true, 00:12:26.521 "reset": true, 00:12:26.521 "compare": false, 00:12:26.521 "compare_and_write": false, 00:12:26.521 "abort": true, 00:12:26.521 "nvme_admin": false, 00:12:26.521 "nvme_io": false 00:12:26.521 }, 00:12:26.521 "memory_domains": [ 00:12:26.521 { 00:12:26.521 "dma_device_id": "system", 00:12:26.521 "dma_device_type": 1 00:12:26.521 }, 00:12:26.521 { 00:12:26.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.521 "dma_device_type": 2 00:12:26.521 } 00:12:26.521 ], 00:12:26.521 "driver_specific": { 00:12:26.521 "passthru": { 00:12:26.521 "name": "pt2", 00:12:26.521 "base_bdev_name": "malloc2" 00:12:26.521 } 00:12:26.521 } 00:12:26.521 }' 00:12:26.521 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:26.521 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:26.779 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:26.779 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:26.779 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:26.779 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:26.779 23:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:26.779 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:27.050 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:27.050 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:27.050 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:27.050 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:27.050 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:27.050 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:27.319 [2024-05-14 23:27:50.429399] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.319 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b05fbc99-e8f8-4d4e-89ec-f2199cae8824 00:12:27.319 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b05fbc99-e8f8-4d4e-89ec-f2199cae8824 ']' 00:12:27.319 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:27.576 [2024-05-14 23:27:50.657326] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.576 [2024-05-14 23:27:50.657358] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.576 [2024-05-14 23:27:50.657444] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.576 [2024-05-14 23:27:50.657483] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.576 [2024-05-14 23:27:50.657493] bdev_raid.c: 
350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:12:27.576 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.576 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:27.834 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:27.834 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:27.834 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:27.834 23:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:27.834 23:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:27.834 23:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:28.093 23:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:28.093 23:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:28.351 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:12:28.609 [2024-05-14 23:27:51.685570] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is 
claimed 00:12:28.609 [2024-05-14 23:27:51.687490] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:28.609 [2024-05-14 23:27:51.687568] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:28.609 [2024-05-14 23:27:51.687661] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:28.609 [2024-05-14 23:27:51.687708] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.609 [2024-05-14 23:27:51.687724] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:12:28.609 request: 00:12:28.609 { 00:12:28.609 "name": "raid_bdev1", 00:12:28.609 "raid_level": "raid0", 00:12:28.609 "base_bdevs": [ 00:12:28.609 "malloc1", 00:12:28.609 "malloc2" 00:12:28.609 ], 00:12:28.609 "superblock": false, 00:12:28.609 "strip_size_kb": 64, 00:12:28.609 "method": "bdev_raid_create", 00:12:28.609 "req_id": 1 00:12:28.609 } 00:12:28.609 Got JSON-RPC error response 00:12:28.609 response: 00:12:28.609 { 00:12:28.609 "code": -17, 00:12:28.609 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:28.609 } 00:12:28.609 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:28.609 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:28.609 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:28.609 23:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:28.609 23:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.609 23:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:28.868 23:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:28.868 23:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:28.868 23:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:28.868 [2024-05-14 23:27:52.109564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:28.868 [2024-05-14 23:27:52.109687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.868 [2024-05-14 23:27:52.109730] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:12:28.868 [2024-05-14 23:27:52.109758] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.868 [2024-05-14 23:27:52.111745] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.868 [2024-05-14 23:27:52.111816] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:28.868 [2024-05-14 23:27:52.111951] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:12:28.868 [2024-05-14 23:27:52.112012] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:28.868 pt1 00:12:28.868 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:12:28.868 23:27:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:28.868 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:28.868 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:28.868 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:28.868 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:28.868 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:28.868 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:28.868 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:28.868 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:28.868 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.868 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.127 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:29.127 "name": "raid_bdev1", 00:12:29.127 "uuid": "b05fbc99-e8f8-4d4e-89ec-f2199cae8824", 00:12:29.127 "strip_size_kb": 64, 00:12:29.127 "state": "configuring", 00:12:29.127 "raid_level": "raid0", 00:12:29.127 "superblock": true, 00:12:29.127 "num_base_bdevs": 2, 00:12:29.127 "num_base_bdevs_discovered": 1, 00:12:29.127 "num_base_bdevs_operational": 2, 00:12:29.127 "base_bdevs_list": [ 00:12:29.127 { 00:12:29.127 "name": "pt1", 00:12:29.127 "uuid": "076dea92-120c-5097-be0a-892b85e29448", 00:12:29.127 "is_configured": true, 00:12:29.127 "data_offset": 2048, 00:12:29.127 "data_size": 63488 00:12:29.127 }, 00:12:29.127 { 00:12:29.127 "name": null, 00:12:29.127 "uuid": "9380302c-0a07-59f5-bfd0-e9a102a23cb5", 00:12:29.127 "is_configured": false, 00:12:29.127 "data_offset": 2048, 00:12:29.127 "data_size": 63488 00:12:29.127 } 00:12:29.127 ] 00:12:29.127 }' 00:12:29.127 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:29.127 23:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.692 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:29.692 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:29.692 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:29.692 23:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:29.950 [2024-05-14 23:27:53.157755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:29.950 [2024-05-14 23:27:53.157883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.950 [2024-05-14 23:27:53.157940] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:12:29.950 [2024-05-14 23:27:53.157967] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.950 [2024-05-14 23:27:53.158560] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:29.950 [2024-05-14 23:27:53.158603] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:29.950 [2024-05-14 23:27:53.158704] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:29.950 [2024-05-14 23:27:53.158730] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:29.950 [2024-05-14 23:27:53.158820] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:12:29.950 [2024-05-14 23:27:53.158833] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:29.950 [2024-05-14 23:27:53.158916] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:12:29.950 [2024-05-14 23:27:53.159128] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:12:29.950 [2024-05-14 23:27:53.159143] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:12:29.950 [2024-05-14 23:27:53.159287] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.950 pt2 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.950 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.208 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:30.208 "name": "raid_bdev1", 00:12:30.208 "uuid": "b05fbc99-e8f8-4d4e-89ec-f2199cae8824", 00:12:30.208 "strip_size_kb": 64, 00:12:30.208 "state": "online", 00:12:30.208 "raid_level": "raid0", 00:12:30.208 "superblock": true, 00:12:30.208 "num_base_bdevs": 2, 00:12:30.208 "num_base_bdevs_discovered": 2, 00:12:30.208 "num_base_bdevs_operational": 2, 00:12:30.208 "base_bdevs_list": [ 00:12:30.208 { 00:12:30.208 "name": "pt1", 00:12:30.208 "uuid": "076dea92-120c-5097-be0a-892b85e29448", 00:12:30.208 "is_configured": true, 00:12:30.208 "data_offset": 2048, 00:12:30.208 "data_size": 63488 00:12:30.208 }, 00:12:30.208 { 00:12:30.208 "name": "pt2", 
00:12:30.208 "uuid": "9380302c-0a07-59f5-bfd0-e9a102a23cb5", 00:12:30.208 "is_configured": true, 00:12:30.208 "data_offset": 2048, 00:12:30.208 "data_size": 63488 00:12:30.208 } 00:12:30.208 ] 00:12:30.208 }' 00:12:30.208 23:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:30.208 23:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.774 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:30.774 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:12:30.774 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:30.774 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:30.774 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:30.774 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:30.774 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:30.774 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:31.033 [2024-05-14 23:27:54.258724] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.033 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:31.033 "name": "raid_bdev1", 00:12:31.033 "aliases": [ 00:12:31.033 "b05fbc99-e8f8-4d4e-89ec-f2199cae8824" 00:12:31.033 ], 00:12:31.033 "product_name": "Raid Volume", 00:12:31.033 "block_size": 512, 00:12:31.033 "num_blocks": 126976, 00:12:31.033 "uuid": "b05fbc99-e8f8-4d4e-89ec-f2199cae8824", 00:12:31.033 "assigned_rate_limits": { 00:12:31.033 "rw_ios_per_sec": 0, 00:12:31.033 "rw_mbytes_per_sec": 0, 00:12:31.033 "r_mbytes_per_sec": 0, 00:12:31.033 "w_mbytes_per_sec": 0 00:12:31.033 }, 00:12:31.033 "claimed": false, 00:12:31.033 "zoned": false, 00:12:31.033 "supported_io_types": { 00:12:31.033 "read": true, 00:12:31.033 "write": true, 00:12:31.033 "unmap": true, 00:12:31.033 "write_zeroes": true, 00:12:31.033 "flush": true, 00:12:31.033 "reset": true, 00:12:31.033 "compare": false, 00:12:31.033 "compare_and_write": false, 00:12:31.033 "abort": false, 00:12:31.033 "nvme_admin": false, 00:12:31.033 "nvme_io": false 00:12:31.033 }, 00:12:31.033 "memory_domains": [ 00:12:31.033 { 00:12:31.033 "dma_device_id": "system", 00:12:31.033 "dma_device_type": 1 00:12:31.033 }, 00:12:31.033 { 00:12:31.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.033 "dma_device_type": 2 00:12:31.033 }, 00:12:31.033 { 00:12:31.033 "dma_device_id": "system", 00:12:31.033 "dma_device_type": 1 00:12:31.033 }, 00:12:31.033 { 00:12:31.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.033 "dma_device_type": 2 00:12:31.033 } 00:12:31.033 ], 00:12:31.033 "driver_specific": { 00:12:31.033 "raid": { 00:12:31.033 "uuid": "b05fbc99-e8f8-4d4e-89ec-f2199cae8824", 00:12:31.033 "strip_size_kb": 64, 00:12:31.033 "state": "online", 00:12:31.033 "raid_level": "raid0", 00:12:31.033 "superblock": true, 00:12:31.033 "num_base_bdevs": 2, 00:12:31.033 "num_base_bdevs_discovered": 2, 00:12:31.033 "num_base_bdevs_operational": 2, 00:12:31.033 "base_bdevs_list": [ 00:12:31.033 { 00:12:31.033 "name": "pt1", 00:12:31.033 "uuid": "076dea92-120c-5097-be0a-892b85e29448", 00:12:31.033 "is_configured": true, 
00:12:31.033 "data_offset": 2048, 00:12:31.033 "data_size": 63488 00:12:31.033 }, 00:12:31.033 { 00:12:31.033 "name": "pt2", 00:12:31.033 "uuid": "9380302c-0a07-59f5-bfd0-e9a102a23cb5", 00:12:31.033 "is_configured": true, 00:12:31.033 "data_offset": 2048, 00:12:31.033 "data_size": 63488 00:12:31.033 } 00:12:31.033 ] 00:12:31.033 } 00:12:31.033 } 00:12:31.033 }' 00:12:31.033 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:31.291 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:12:31.291 pt2' 00:12:31.291 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:31.291 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:31.291 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:31.291 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:31.291 "name": "pt1", 00:12:31.291 "aliases": [ 00:12:31.291 "076dea92-120c-5097-be0a-892b85e29448" 00:12:31.291 ], 00:12:31.291 "product_name": "passthru", 00:12:31.291 "block_size": 512, 00:12:31.291 "num_blocks": 65536, 00:12:31.291 "uuid": "076dea92-120c-5097-be0a-892b85e29448", 00:12:31.291 "assigned_rate_limits": { 00:12:31.291 "rw_ios_per_sec": 0, 00:12:31.291 "rw_mbytes_per_sec": 0, 00:12:31.291 "r_mbytes_per_sec": 0, 00:12:31.291 "w_mbytes_per_sec": 0 00:12:31.291 }, 00:12:31.291 "claimed": true, 00:12:31.291 "claim_type": "exclusive_write", 00:12:31.291 "zoned": false, 00:12:31.291 "supported_io_types": { 00:12:31.291 "read": true, 00:12:31.291 "write": true, 00:12:31.291 "unmap": true, 00:12:31.291 "write_zeroes": true, 00:12:31.291 "flush": true, 00:12:31.291 "reset": true, 00:12:31.291 "compare": false, 00:12:31.291 "compare_and_write": false, 00:12:31.291 "abort": true, 00:12:31.291 "nvme_admin": false, 00:12:31.291 "nvme_io": false 00:12:31.291 }, 00:12:31.291 "memory_domains": [ 00:12:31.291 { 00:12:31.291 "dma_device_id": "system", 00:12:31.291 "dma_device_type": 1 00:12:31.291 }, 00:12:31.291 { 00:12:31.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.291 "dma_device_type": 2 00:12:31.291 } 00:12:31.291 ], 00:12:31.291 "driver_specific": { 00:12:31.291 "passthru": { 00:12:31.291 "name": "pt1", 00:12:31.291 "base_bdev_name": "malloc1" 00:12:31.291 } 00:12:31.291 } 00:12:31.291 }' 00:12:31.291 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:31.595 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:31.595 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:31.595 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:31.595 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:31.595 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:31.595 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:31.595 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:31.853 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:31.853 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq 
.dif_type 00:12:31.853 23:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:31.853 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:31.853 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:31.853 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:31.853 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:32.112 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:32.112 "name": "pt2", 00:12:32.112 "aliases": [ 00:12:32.112 "9380302c-0a07-59f5-bfd0-e9a102a23cb5" 00:12:32.112 ], 00:12:32.112 "product_name": "passthru", 00:12:32.112 "block_size": 512, 00:12:32.112 "num_blocks": 65536, 00:12:32.112 "uuid": "9380302c-0a07-59f5-bfd0-e9a102a23cb5", 00:12:32.112 "assigned_rate_limits": { 00:12:32.112 "rw_ios_per_sec": 0, 00:12:32.112 "rw_mbytes_per_sec": 0, 00:12:32.112 "r_mbytes_per_sec": 0, 00:12:32.112 "w_mbytes_per_sec": 0 00:12:32.112 }, 00:12:32.112 "claimed": true, 00:12:32.112 "claim_type": "exclusive_write", 00:12:32.112 "zoned": false, 00:12:32.112 "supported_io_types": { 00:12:32.112 "read": true, 00:12:32.112 "write": true, 00:12:32.112 "unmap": true, 00:12:32.112 "write_zeroes": true, 00:12:32.112 "flush": true, 00:12:32.112 "reset": true, 00:12:32.112 "compare": false, 00:12:32.112 "compare_and_write": false, 00:12:32.112 "abort": true, 00:12:32.112 "nvme_admin": false, 00:12:32.112 "nvme_io": false 00:12:32.113 }, 00:12:32.113 "memory_domains": [ 00:12:32.113 { 00:12:32.113 "dma_device_id": "system", 00:12:32.113 "dma_device_type": 1 00:12:32.113 }, 00:12:32.113 { 00:12:32.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.113 "dma_device_type": 2 00:12:32.113 } 00:12:32.113 ], 00:12:32.113 "driver_specific": { 00:12:32.113 "passthru": { 00:12:32.113 "name": "pt2", 00:12:32.113 "base_bdev_name": "malloc2" 00:12:32.113 } 00:12:32.113 } 00:12:32.113 }' 00:12:32.113 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:32.113 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:32.371 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:32.371 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:32.371 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:32.371 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:32.371 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:32.371 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:32.371 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:32.371 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:32.629 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:32.629 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:32.629 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:32.629 23:27:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:32.887 [2024-05-14 23:27:55.975099] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.887 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b05fbc99-e8f8-4d4e-89ec-f2199cae8824 '!=' b05fbc99-e8f8-4d4e-89ec-f2199cae8824 ']' 00:12:32.887 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:32.887 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:32.887 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:12:32.887 23:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 53644 00:12:32.887 23:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 53644 ']' 00:12:32.887 23:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 53644 00:12:32.887 23:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:12:32.887 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:32.887 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 53644 00:12:32.887 killing process with pid 53644 00:12:32.887 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:32.887 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:32.887 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53644' 00:12:32.887 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 53644 00:12:32.887 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 53644 00:12:32.887 [2024-05-14 23:27:56.023734] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.887 [2024-05-14 23:27:56.023790] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.887 [2024-05-14 23:27:56.023822] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.887 [2024-05-14 23:27:56.023832] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:12:32.887 [2024-05-14 23:27:56.166722] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.266 ************************************ 00:12:34.266 END TEST raid_superblock_test 00:12:34.266 ************************************ 00:12:34.266 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:12:34.266 00:12:34.266 real 0m11.654s 00:12:34.266 user 0m20.872s 00:12:34.266 sys 0m1.168s 00:12:34.266 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:34.266 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.266 23:27:57 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:12:34.266 23:27:57 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:12:34.266 23:27:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:34.266 23:27:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:34.266 23:27:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.266 
************************************ 00:12:34.266 START TEST raid_state_function_test 00:12:34.266 ************************************ 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 false 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:12:34.266 Process raid pid: 54013 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=54013 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 54013' 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 54013 /var/tmp/spdk-raid.sock 00:12:34.266 23:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 54013 ']' 00:12:34.267 23:27:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:34.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:34.267 23:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:34.267 23:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:34.267 23:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:34.267 23:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.267 [2024-05-14 23:27:57.480612] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:12:34.267 [2024-05-14 23:27:57.480775] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.526 [2024-05-14 23:27:57.642289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.784 [2024-05-14 23:27:57.843849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.784 [2024-05-14 23:27:58.031347] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.042 23:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:35.042 23:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:12:35.042 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:35.300 [2024-05-14 23:27:58.476802] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.300 [2024-05-14 23:27:58.476872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.300 [2024-05-14 23:27:58.476925] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:35.300 [2024-05-14 23:27:58.476954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.300 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.558 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:35.558 "name": "Existed_Raid", 00:12:35.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.558 "strip_size_kb": 64, 00:12:35.558 "state": "configuring", 00:12:35.558 "raid_level": "concat", 00:12:35.558 "superblock": false, 00:12:35.558 "num_base_bdevs": 2, 00:12:35.558 "num_base_bdevs_discovered": 0, 00:12:35.558 "num_base_bdevs_operational": 2, 00:12:35.558 "base_bdevs_list": [ 00:12:35.558 { 00:12:35.558 "name": "BaseBdev1", 00:12:35.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.558 "is_configured": false, 00:12:35.558 "data_offset": 0, 00:12:35.558 "data_size": 0 00:12:35.558 }, 00:12:35.558 { 00:12:35.558 "name": "BaseBdev2", 00:12:35.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.558 "is_configured": false, 00:12:35.558 "data_offset": 0, 00:12:35.558 "data_size": 0 00:12:35.558 } 00:12:35.558 ] 00:12:35.558 }' 00:12:35.559 23:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:35.559 23:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.126 23:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:36.385 [2024-05-14 23:27:59.492911] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:36.385 [2024-05-14 23:27:59.492946] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:12:36.385 23:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:36.644 [2024-05-14 23:27:59.680925] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:36.644 [2024-05-14 23:27:59.681020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:36.644 [2024-05-14 23:27:59.681068] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:36.644 [2024-05-14 23:27:59.681092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:36.644 23:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:36.902 BaseBdev1 00:12:36.902 [2024-05-14 23:27:59.950646] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.902 23:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:12:36.902 23:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:36.902 23:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:36.902 23:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 
00:12:36.902 23:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:36.902 23:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:36.902 23:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:36.902 23:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:37.161 [ 00:12:37.161 { 00:12:37.161 "name": "BaseBdev1", 00:12:37.161 "aliases": [ 00:12:37.161 "6ad71335-7605-481d-9a3b-308d4910748a" 00:12:37.161 ], 00:12:37.161 "product_name": "Malloc disk", 00:12:37.161 "block_size": 512, 00:12:37.161 "num_blocks": 65536, 00:12:37.161 "uuid": "6ad71335-7605-481d-9a3b-308d4910748a", 00:12:37.161 "assigned_rate_limits": { 00:12:37.161 "rw_ios_per_sec": 0, 00:12:37.161 "rw_mbytes_per_sec": 0, 00:12:37.161 "r_mbytes_per_sec": 0, 00:12:37.161 "w_mbytes_per_sec": 0 00:12:37.161 }, 00:12:37.161 "claimed": true, 00:12:37.161 "claim_type": "exclusive_write", 00:12:37.161 "zoned": false, 00:12:37.161 "supported_io_types": { 00:12:37.161 "read": true, 00:12:37.161 "write": true, 00:12:37.161 "unmap": true, 00:12:37.161 "write_zeroes": true, 00:12:37.161 "flush": true, 00:12:37.161 "reset": true, 00:12:37.161 "compare": false, 00:12:37.161 "compare_and_write": false, 00:12:37.161 "abort": true, 00:12:37.161 "nvme_admin": false, 00:12:37.161 "nvme_io": false 00:12:37.161 }, 00:12:37.161 "memory_domains": [ 00:12:37.161 { 00:12:37.161 "dma_device_id": "system", 00:12:37.161 "dma_device_type": 1 00:12:37.161 }, 00:12:37.161 { 00:12:37.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.161 "dma_device_type": 2 00:12:37.161 } 00:12:37.161 ], 00:12:37.161 "driver_specific": {} 00:12:37.161 } 00:12:37.161 ] 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.161 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:37.420 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:37.420 "name": "Existed_Raid", 00:12:37.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.420 "strip_size_kb": 64, 00:12:37.420 "state": "configuring", 00:12:37.420 "raid_level": "concat", 00:12:37.420 "superblock": false, 00:12:37.420 "num_base_bdevs": 2, 00:12:37.420 "num_base_bdevs_discovered": 1, 00:12:37.420 "num_base_bdevs_operational": 2, 00:12:37.420 "base_bdevs_list": [ 00:12:37.420 { 00:12:37.420 "name": "BaseBdev1", 00:12:37.420 "uuid": "6ad71335-7605-481d-9a3b-308d4910748a", 00:12:37.420 "is_configured": true, 00:12:37.420 "data_offset": 0, 00:12:37.420 "data_size": 65536 00:12:37.420 }, 00:12:37.420 { 00:12:37.420 "name": "BaseBdev2", 00:12:37.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.420 "is_configured": false, 00:12:37.420 "data_offset": 0, 00:12:37.420 "data_size": 0 00:12:37.420 } 00:12:37.420 ] 00:12:37.420 }' 00:12:37.420 23:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:37.420 23:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.987 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:38.245 [2024-05-14 23:28:01.382887] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.245 [2024-05-14 23:28:01.382943] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:12:38.245 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:38.503 [2024-05-14 23:28:01.574939] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.503 [2024-05-14 23:28:01.576692] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:38.503 [2024-05-14 23:28:01.576751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:38.503 
23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:38.503 "name": "Existed_Raid", 00:12:38.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.503 "strip_size_kb": 64, 00:12:38.503 "state": "configuring", 00:12:38.503 "raid_level": "concat", 00:12:38.503 "superblock": false, 00:12:38.503 "num_base_bdevs": 2, 00:12:38.503 "num_base_bdevs_discovered": 1, 00:12:38.503 "num_base_bdevs_operational": 2, 00:12:38.503 "base_bdevs_list": [ 00:12:38.503 { 00:12:38.503 "name": "BaseBdev1", 00:12:38.503 "uuid": "6ad71335-7605-481d-9a3b-308d4910748a", 00:12:38.503 "is_configured": true, 00:12:38.503 "data_offset": 0, 00:12:38.503 "data_size": 65536 00:12:38.503 }, 00:12:38.503 { 00:12:38.503 "name": "BaseBdev2", 00:12:38.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.503 "is_configured": false, 00:12:38.503 "data_offset": 0, 00:12:38.503 "data_size": 0 00:12:38.503 } 00:12:38.503 ] 00:12:38.503 }' 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:38.503 23:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.441 23:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:39.441 [2024-05-14 23:28:02.698509] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.441 [2024-05-14 23:28:02.698565] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:12:39.441 [2024-05-14 23:28:02.698575] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:39.441 [2024-05-14 23:28:02.698725] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:12:39.441 [2024-05-14 23:28:02.698969] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:12:39.441 [2024-05-14 23:28:02.698999] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:12:39.441 BaseBdev2 00:12:39.441 [2024-05-14 23:28:02.699577] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.441 23:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:12:39.441 23:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:39.441 23:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:39.441 23:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:12:39.441 23:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:39.441 23:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:39.441 23:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:12:39.699 23:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:39.958 [ 00:12:39.958 { 00:12:39.958 "name": "BaseBdev2", 00:12:39.958 "aliases": [ 00:12:39.958 "f8eabb9b-b88a-4985-ab74-7e29246bd78e" 00:12:39.958 ], 00:12:39.958 "product_name": "Malloc disk", 00:12:39.958 "block_size": 512, 00:12:39.958 "num_blocks": 65536, 00:12:39.958 "uuid": "f8eabb9b-b88a-4985-ab74-7e29246bd78e", 00:12:39.958 "assigned_rate_limits": { 00:12:39.958 "rw_ios_per_sec": 0, 00:12:39.958 "rw_mbytes_per_sec": 0, 00:12:39.958 "r_mbytes_per_sec": 0, 00:12:39.958 "w_mbytes_per_sec": 0 00:12:39.958 }, 00:12:39.958 "claimed": true, 00:12:39.958 "claim_type": "exclusive_write", 00:12:39.958 "zoned": false, 00:12:39.958 "supported_io_types": { 00:12:39.958 "read": true, 00:12:39.958 "write": true, 00:12:39.958 "unmap": true, 00:12:39.958 "write_zeroes": true, 00:12:39.958 "flush": true, 00:12:39.958 "reset": true, 00:12:39.958 "compare": false, 00:12:39.958 "compare_and_write": false, 00:12:39.958 "abort": true, 00:12:39.958 "nvme_admin": false, 00:12:39.958 "nvme_io": false 00:12:39.958 }, 00:12:39.958 "memory_domains": [ 00:12:39.958 { 00:12:39.958 "dma_device_id": "system", 00:12:39.958 "dma_device_type": 1 00:12:39.958 }, 00:12:39.958 { 00:12:39.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.958 "dma_device_type": 2 00:12:39.958 } 00:12:39.958 ], 00:12:39.958 "driver_specific": {} 00:12:39.958 } 00:12:39.958 ] 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.958 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.216 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:40.216 "name": "Existed_Raid", 00:12:40.216 "uuid": 
"134c044e-d07a-44d6-ad8a-2e2b151d220e", 00:12:40.216 "strip_size_kb": 64, 00:12:40.216 "state": "online", 00:12:40.216 "raid_level": "concat", 00:12:40.216 "superblock": false, 00:12:40.216 "num_base_bdevs": 2, 00:12:40.216 "num_base_bdevs_discovered": 2, 00:12:40.216 "num_base_bdevs_operational": 2, 00:12:40.216 "base_bdevs_list": [ 00:12:40.216 { 00:12:40.216 "name": "BaseBdev1", 00:12:40.216 "uuid": "6ad71335-7605-481d-9a3b-308d4910748a", 00:12:40.216 "is_configured": true, 00:12:40.216 "data_offset": 0, 00:12:40.216 "data_size": 65536 00:12:40.216 }, 00:12:40.216 { 00:12:40.216 "name": "BaseBdev2", 00:12:40.216 "uuid": "f8eabb9b-b88a-4985-ab74-7e29246bd78e", 00:12:40.216 "is_configured": true, 00:12:40.216 "data_offset": 0, 00:12:40.216 "data_size": 65536 00:12:40.216 } 00:12:40.216 ] 00:12:40.216 }' 00:12:40.216 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:40.216 23:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.783 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:12:40.783 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:40.783 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:40.783 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:40.783 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:40.783 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:12:40.783 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:40.783 23:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:41.042 [2024-05-14 23:28:04.159648] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.042 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:41.042 "name": "Existed_Raid", 00:12:41.042 "aliases": [ 00:12:41.042 "134c044e-d07a-44d6-ad8a-2e2b151d220e" 00:12:41.042 ], 00:12:41.042 "product_name": "Raid Volume", 00:12:41.042 "block_size": 512, 00:12:41.042 "num_blocks": 131072, 00:12:41.042 "uuid": "134c044e-d07a-44d6-ad8a-2e2b151d220e", 00:12:41.042 "assigned_rate_limits": { 00:12:41.042 "rw_ios_per_sec": 0, 00:12:41.042 "rw_mbytes_per_sec": 0, 00:12:41.042 "r_mbytes_per_sec": 0, 00:12:41.042 "w_mbytes_per_sec": 0 00:12:41.042 }, 00:12:41.042 "claimed": false, 00:12:41.042 "zoned": false, 00:12:41.042 "supported_io_types": { 00:12:41.042 "read": true, 00:12:41.042 "write": true, 00:12:41.042 "unmap": true, 00:12:41.042 "write_zeroes": true, 00:12:41.042 "flush": true, 00:12:41.042 "reset": true, 00:12:41.042 "compare": false, 00:12:41.042 "compare_and_write": false, 00:12:41.042 "abort": false, 00:12:41.042 "nvme_admin": false, 00:12:41.042 "nvme_io": false 00:12:41.042 }, 00:12:41.042 "memory_domains": [ 00:12:41.042 { 00:12:41.042 "dma_device_id": "system", 00:12:41.042 "dma_device_type": 1 00:12:41.042 }, 00:12:41.042 { 00:12:41.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.042 "dma_device_type": 2 00:12:41.042 }, 00:12:41.042 { 00:12:41.042 "dma_device_id": "system", 00:12:41.042 "dma_device_type": 1 00:12:41.042 }, 00:12:41.042 { 00:12:41.042 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.042 "dma_device_type": 2 00:12:41.042 } 00:12:41.042 ], 00:12:41.042 "driver_specific": { 00:12:41.042 "raid": { 00:12:41.042 "uuid": "134c044e-d07a-44d6-ad8a-2e2b151d220e", 00:12:41.042 "strip_size_kb": 64, 00:12:41.042 "state": "online", 00:12:41.042 "raid_level": "concat", 00:12:41.042 "superblock": false, 00:12:41.042 "num_base_bdevs": 2, 00:12:41.042 "num_base_bdevs_discovered": 2, 00:12:41.042 "num_base_bdevs_operational": 2, 00:12:41.042 "base_bdevs_list": [ 00:12:41.042 { 00:12:41.042 "name": "BaseBdev1", 00:12:41.042 "uuid": "6ad71335-7605-481d-9a3b-308d4910748a", 00:12:41.042 "is_configured": true, 00:12:41.042 "data_offset": 0, 00:12:41.042 "data_size": 65536 00:12:41.042 }, 00:12:41.042 { 00:12:41.042 "name": "BaseBdev2", 00:12:41.042 "uuid": "f8eabb9b-b88a-4985-ab74-7e29246bd78e", 00:12:41.042 "is_configured": true, 00:12:41.042 "data_offset": 0, 00:12:41.042 "data_size": 65536 00:12:41.042 } 00:12:41.042 ] 00:12:41.042 } 00:12:41.042 } 00:12:41.042 }' 00:12:41.042 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:41.042 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:12:41.042 BaseBdev2' 00:12:41.042 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:41.042 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:41.042 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:41.300 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:41.300 "name": "BaseBdev1", 00:12:41.300 "aliases": [ 00:12:41.300 "6ad71335-7605-481d-9a3b-308d4910748a" 00:12:41.300 ], 00:12:41.300 "product_name": "Malloc disk", 00:12:41.300 "block_size": 512, 00:12:41.300 "num_blocks": 65536, 00:12:41.300 "uuid": "6ad71335-7605-481d-9a3b-308d4910748a", 00:12:41.300 "assigned_rate_limits": { 00:12:41.300 "rw_ios_per_sec": 0, 00:12:41.300 "rw_mbytes_per_sec": 0, 00:12:41.300 "r_mbytes_per_sec": 0, 00:12:41.300 "w_mbytes_per_sec": 0 00:12:41.300 }, 00:12:41.300 "claimed": true, 00:12:41.300 "claim_type": "exclusive_write", 00:12:41.300 "zoned": false, 00:12:41.300 "supported_io_types": { 00:12:41.300 "read": true, 00:12:41.300 "write": true, 00:12:41.300 "unmap": true, 00:12:41.300 "write_zeroes": true, 00:12:41.300 "flush": true, 00:12:41.300 "reset": true, 00:12:41.300 "compare": false, 00:12:41.300 "compare_and_write": false, 00:12:41.300 "abort": true, 00:12:41.300 "nvme_admin": false, 00:12:41.300 "nvme_io": false 00:12:41.300 }, 00:12:41.300 "memory_domains": [ 00:12:41.300 { 00:12:41.300 "dma_device_id": "system", 00:12:41.300 "dma_device_type": 1 00:12:41.300 }, 00:12:41.300 { 00:12:41.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.300 "dma_device_type": 2 00:12:41.300 } 00:12:41.300 ], 00:12:41.300 "driver_specific": {} 00:12:41.300 }' 00:12:41.300 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:41.300 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:41.300 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:41.300 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- 
# jq .md_size 00:12:41.558 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:41.558 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:41.558 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:41.558 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:41.558 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:41.558 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:41.816 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:41.816 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:41.816 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:41.816 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:41.816 23:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:42.074 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:42.074 "name": "BaseBdev2", 00:12:42.074 "aliases": [ 00:12:42.074 "f8eabb9b-b88a-4985-ab74-7e29246bd78e" 00:12:42.074 ], 00:12:42.074 "product_name": "Malloc disk", 00:12:42.074 "block_size": 512, 00:12:42.074 "num_blocks": 65536, 00:12:42.074 "uuid": "f8eabb9b-b88a-4985-ab74-7e29246bd78e", 00:12:42.074 "assigned_rate_limits": { 00:12:42.074 "rw_ios_per_sec": 0, 00:12:42.074 "rw_mbytes_per_sec": 0, 00:12:42.074 "r_mbytes_per_sec": 0, 00:12:42.074 "w_mbytes_per_sec": 0 00:12:42.074 }, 00:12:42.074 "claimed": true, 00:12:42.074 "claim_type": "exclusive_write", 00:12:42.074 "zoned": false, 00:12:42.074 "supported_io_types": { 00:12:42.074 "read": true, 00:12:42.074 "write": true, 00:12:42.074 "unmap": true, 00:12:42.074 "write_zeroes": true, 00:12:42.074 "flush": true, 00:12:42.074 "reset": true, 00:12:42.074 "compare": false, 00:12:42.074 "compare_and_write": false, 00:12:42.074 "abort": true, 00:12:42.074 "nvme_admin": false, 00:12:42.074 "nvme_io": false 00:12:42.074 }, 00:12:42.074 "memory_domains": [ 00:12:42.074 { 00:12:42.074 "dma_device_id": "system", 00:12:42.074 "dma_device_type": 1 00:12:42.074 }, 00:12:42.074 { 00:12:42.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.074 "dma_device_type": 2 00:12:42.074 } 00:12:42.074 ], 00:12:42.074 "driver_specific": {} 00:12:42.074 }' 00:12:42.074 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:42.074 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:42.074 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:42.074 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:42.074 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:42.074 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:42.074 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:42.332 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:42.332 23:28:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:42.332 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:42.332 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:42.332 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:42.332 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:42.590 [2024-05-14 23:28:05.791909] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.590 [2024-05-14 23:28:05.791949] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.590 [2024-05-14 23:28:05.792004] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.848 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:12:42.848 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:12:42.848 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:42.848 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:12:42.848 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:12:42.848 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:42.848 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:42.848 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:42.848 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:42.848 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:42.848 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:42.849 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:42.849 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:42.849 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:42.849 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:42.849 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.849 23:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.849 23:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:42.849 "name": "Existed_Raid", 00:12:42.849 "uuid": "134c044e-d07a-44d6-ad8a-2e2b151d220e", 00:12:42.849 "strip_size_kb": 64, 00:12:42.849 "state": "offline", 00:12:42.849 "raid_level": "concat", 00:12:42.849 "superblock": false, 00:12:42.849 "num_base_bdevs": 2, 00:12:42.849 "num_base_bdevs_discovered": 1, 00:12:42.849 "num_base_bdevs_operational": 1, 00:12:42.849 "base_bdevs_list": [ 00:12:42.849 { 00:12:42.849 "name": null, 00:12:42.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.849 "is_configured": false, 00:12:42.849 "data_offset": 0, 
00:12:42.849 "data_size": 65536 00:12:42.849 }, 00:12:42.849 { 00:12:42.849 "name": "BaseBdev2", 00:12:42.849 "uuid": "f8eabb9b-b88a-4985-ab74-7e29246bd78e", 00:12:42.849 "is_configured": true, 00:12:42.849 "data_offset": 0, 00:12:42.849 "data_size": 65536 00:12:42.849 } 00:12:42.849 ] 00:12:42.849 }' 00:12:42.849 23:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:42.849 23:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.783 23:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:43.783 23:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:43.783 23:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.783 23:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:43.783 23:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:43.783 23:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.783 23:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:44.042 [2024-05-14 23:28:07.141917] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:44.042 [2024-05-14 23:28:07.141990] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:12:44.042 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:44.042 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:44.042 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:12:44.042 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 54013 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 54013 ']' 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 54013 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 54013 00:12:44.301 killing process with pid 54013 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # 
echo 'killing process with pid 54013' 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 54013 00:12:44.301 23:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 54013 00:12:44.301 [2024-05-14 23:28:07.493658] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.301 [2024-05-14 23:28:07.493774] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:12:45.675 00:12:45.675 real 0m11.311s 00:12:45.675 user 0m20.098s 00:12:45.675 sys 0m1.152s 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.675 ************************************ 00:12:45.675 END TEST raid_state_function_test 00:12:45.675 ************************************ 00:12:45.675 23:28:08 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:12:45.675 23:28:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:45.675 23:28:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:45.675 23:28:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.675 ************************************ 00:12:45.675 START TEST raid_state_function_test_sb 00:12:45.675 ************************************ 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 true 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:12:45.675 Process raid pid: 54404 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:12:45.675 
23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=54404 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 54404' 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 54404 /var/tmp/spdk-raid.sock 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:45.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 54404 ']' 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:45.675 23:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.675 [2024-05-14 23:28:08.854492] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
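The _sb variant starting here repeats the same flow but passes -s to bdev_raid_create, so the raid reports "superblock": true and each base bdev reserves space for the on-disk superblock. A minimal side-by-side of the two create calls, both copied from this trace, with the base-bdev layout each run reports:

  # raid_state_function_test (no superblock): base bdevs report data_offset 0, data_size 65536
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # raid_state_function_test_sb (-s enables the superblock): base bdevs report data_offset 2048, data_size 63488
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
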
00:12:45.675 [2024-05-14 23:28:08.854790] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.933 [2024-05-14 23:28:09.028714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.191 [2024-05-14 23:28:09.237311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.191 [2024-05-14 23:28:09.422006] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.450 23:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:46.450 23:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:12:46.450 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:46.708 [2024-05-14 23:28:09.947965] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.708 [2024-05-14 23:28:09.948036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.708 [2024-05-14 23:28:09.948066] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.708 [2024-05-14 23:28:09.948085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.708 23:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.007 23:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:47.007 "name": "Existed_Raid", 00:12:47.007 "uuid": "ee6da98a-dbb4-404a-90ca-a08b9865a1a1", 00:12:47.007 "strip_size_kb": 64, 00:12:47.007 "state": "configuring", 00:12:47.007 "raid_level": "concat", 00:12:47.007 "superblock": true, 00:12:47.007 "num_base_bdevs": 2, 00:12:47.007 "num_base_bdevs_discovered": 0, 00:12:47.007 
"num_base_bdevs_operational": 2, 00:12:47.007 "base_bdevs_list": [ 00:12:47.007 { 00:12:47.007 "name": "BaseBdev1", 00:12:47.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.007 "is_configured": false, 00:12:47.007 "data_offset": 0, 00:12:47.007 "data_size": 0 00:12:47.007 }, 00:12:47.007 { 00:12:47.007 "name": "BaseBdev2", 00:12:47.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.007 "is_configured": false, 00:12:47.007 "data_offset": 0, 00:12:47.007 "data_size": 0 00:12:47.007 } 00:12:47.007 ] 00:12:47.007 }' 00:12:47.007 23:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:47.007 23:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.605 23:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:47.863 [2024-05-14 23:28:11.000119] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.863 [2024-05-14 23:28:11.000408] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:12:47.863 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:48.122 [2024-05-14 23:28:11.188173] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.122 [2024-05-14 23:28:11.188301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.122 [2024-05-14 23:28:11.188332] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.122 [2024-05-14 23:28:11.188357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.123 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.382 [2024-05-14 23:28:11.414910] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.382 BaseBdev1 00:12:48.382 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:12:48.382 23:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:12:48.382 23:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:48.382 23:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:48.382 23:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:48.382 23:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:48.382 23:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:48.382 23:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:48.639 [ 00:12:48.639 { 00:12:48.639 "name": "BaseBdev1", 00:12:48.639 "aliases": [ 00:12:48.639 "213882da-3e6e-43cf-99af-412fd28ec60f" 
00:12:48.639 ], 00:12:48.639 "product_name": "Malloc disk", 00:12:48.640 "block_size": 512, 00:12:48.640 "num_blocks": 65536, 00:12:48.640 "uuid": "213882da-3e6e-43cf-99af-412fd28ec60f", 00:12:48.640 "assigned_rate_limits": { 00:12:48.640 "rw_ios_per_sec": 0, 00:12:48.640 "rw_mbytes_per_sec": 0, 00:12:48.640 "r_mbytes_per_sec": 0, 00:12:48.640 "w_mbytes_per_sec": 0 00:12:48.640 }, 00:12:48.640 "claimed": true, 00:12:48.640 "claim_type": "exclusive_write", 00:12:48.640 "zoned": false, 00:12:48.640 "supported_io_types": { 00:12:48.640 "read": true, 00:12:48.640 "write": true, 00:12:48.640 "unmap": true, 00:12:48.640 "write_zeroes": true, 00:12:48.640 "flush": true, 00:12:48.640 "reset": true, 00:12:48.640 "compare": false, 00:12:48.640 "compare_and_write": false, 00:12:48.640 "abort": true, 00:12:48.640 "nvme_admin": false, 00:12:48.640 "nvme_io": false 00:12:48.640 }, 00:12:48.640 "memory_domains": [ 00:12:48.640 { 00:12:48.640 "dma_device_id": "system", 00:12:48.640 "dma_device_type": 1 00:12:48.640 }, 00:12:48.640 { 00:12:48.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.640 "dma_device_type": 2 00:12:48.640 } 00:12:48.640 ], 00:12:48.640 "driver_specific": {} 00:12:48.640 } 00:12:48.640 ] 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.640 23:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.898 23:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:48.898 "name": "Existed_Raid", 00:12:48.898 "uuid": "ddb0767f-76f3-4f92-9a02-d3b435444e00", 00:12:48.898 "strip_size_kb": 64, 00:12:48.898 "state": "configuring", 00:12:48.898 "raid_level": "concat", 00:12:48.898 "superblock": true, 00:12:48.898 "num_base_bdevs": 2, 00:12:48.898 "num_base_bdevs_discovered": 1, 00:12:48.898 "num_base_bdevs_operational": 2, 00:12:48.898 "base_bdevs_list": [ 00:12:48.898 { 00:12:48.898 "name": "BaseBdev1", 00:12:48.898 "uuid": "213882da-3e6e-43cf-99af-412fd28ec60f", 00:12:48.898 "is_configured": true, 00:12:48.898 "data_offset": 2048, 00:12:48.898 "data_size": 63488 
00:12:48.898 }, 00:12:48.898 { 00:12:48.898 "name": "BaseBdev2", 00:12:48.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.898 "is_configured": false, 00:12:48.898 "data_offset": 0, 00:12:48.898 "data_size": 0 00:12:48.898 } 00:12:48.898 ] 00:12:48.898 }' 00:12:48.898 23:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:48.898 23:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.831 23:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:49.831 [2024-05-14 23:28:13.003227] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.831 [2024-05-14 23:28:13.003296] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:12:49.831 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:50.088 [2024-05-14 23:28:13.247335] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.088 [2024-05-14 23:28:13.249014] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:50.088 [2024-05-14 23:28:13.249072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.088 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.345 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:50.345 "name": "Existed_Raid", 00:12:50.345 "uuid": "b1840719-47c4-4cd4-842b-8dd0feaf032c", 00:12:50.345 "strip_size_kb": 64, 00:12:50.345 
"state": "configuring", 00:12:50.345 "raid_level": "concat", 00:12:50.345 "superblock": true, 00:12:50.345 "num_base_bdevs": 2, 00:12:50.345 "num_base_bdevs_discovered": 1, 00:12:50.345 "num_base_bdevs_operational": 2, 00:12:50.345 "base_bdevs_list": [ 00:12:50.345 { 00:12:50.345 "name": "BaseBdev1", 00:12:50.345 "uuid": "213882da-3e6e-43cf-99af-412fd28ec60f", 00:12:50.345 "is_configured": true, 00:12:50.345 "data_offset": 2048, 00:12:50.345 "data_size": 63488 00:12:50.345 }, 00:12:50.345 { 00:12:50.345 "name": "BaseBdev2", 00:12:50.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.345 "is_configured": false, 00:12:50.345 "data_offset": 0, 00:12:50.345 "data_size": 0 00:12:50.345 } 00:12:50.345 ] 00:12:50.345 }' 00:12:50.345 23:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:50.345 23:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.280 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:51.280 BaseBdev2 00:12:51.280 [2024-05-14 23:28:14.484541] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.280 [2024-05-14 23:28:14.484725] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:12:51.280 [2024-05-14 23:28:14.484739] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:51.281 [2024-05-14 23:28:14.484844] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:12:51.281 [2024-05-14 23:28:14.485073] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:12:51.281 [2024-05-14 23:28:14.485088] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:12:51.281 [2024-05-14 23:28:14.485447] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.281 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:12:51.281 23:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:12:51.281 23:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:51.281 23:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:12:51.281 23:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:51.281 23:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:51.281 23:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:51.538 23:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:51.797 [ 00:12:51.797 { 00:12:51.797 "name": "BaseBdev2", 00:12:51.797 "aliases": [ 00:12:51.797 "8961fcf7-8026-483d-b25f-fbbf9209a749" 00:12:51.797 ], 00:12:51.797 "product_name": "Malloc disk", 00:12:51.797 "block_size": 512, 00:12:51.797 "num_blocks": 65536, 00:12:51.797 "uuid": "8961fcf7-8026-483d-b25f-fbbf9209a749", 00:12:51.797 "assigned_rate_limits": { 00:12:51.797 "rw_ios_per_sec": 0, 
00:12:51.797 "rw_mbytes_per_sec": 0, 00:12:51.797 "r_mbytes_per_sec": 0, 00:12:51.797 "w_mbytes_per_sec": 0 00:12:51.797 }, 00:12:51.797 "claimed": true, 00:12:51.797 "claim_type": "exclusive_write", 00:12:51.797 "zoned": false, 00:12:51.797 "supported_io_types": { 00:12:51.797 "read": true, 00:12:51.797 "write": true, 00:12:51.797 "unmap": true, 00:12:51.797 "write_zeroes": true, 00:12:51.797 "flush": true, 00:12:51.797 "reset": true, 00:12:51.797 "compare": false, 00:12:51.797 "compare_and_write": false, 00:12:51.797 "abort": true, 00:12:51.797 "nvme_admin": false, 00:12:51.797 "nvme_io": false 00:12:51.797 }, 00:12:51.797 "memory_domains": [ 00:12:51.797 { 00:12:51.797 "dma_device_id": "system", 00:12:51.797 "dma_device_type": 1 00:12:51.797 }, 00:12:51.797 { 00:12:51.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.797 "dma_device_type": 2 00:12:51.797 } 00:12:51.797 ], 00:12:51.797 "driver_specific": {} 00:12:51.797 } 00:12:51.797 ] 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.797 23:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.797 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:51.797 "name": "Existed_Raid", 00:12:51.797 "uuid": "b1840719-47c4-4cd4-842b-8dd0feaf032c", 00:12:51.797 "strip_size_kb": 64, 00:12:51.797 "state": "online", 00:12:51.797 "raid_level": "concat", 00:12:51.797 "superblock": true, 00:12:51.797 "num_base_bdevs": 2, 00:12:51.797 "num_base_bdevs_discovered": 2, 00:12:51.797 "num_base_bdevs_operational": 2, 00:12:51.797 "base_bdevs_list": [ 00:12:51.797 { 00:12:51.797 "name": "BaseBdev1", 00:12:51.797 "uuid": "213882da-3e6e-43cf-99af-412fd28ec60f", 00:12:51.797 "is_configured": true, 00:12:51.797 "data_offset": 2048, 00:12:51.797 "data_size": 63488 00:12:51.797 }, 00:12:51.797 { 00:12:51.797 "name": 
"BaseBdev2", 00:12:51.797 "uuid": "8961fcf7-8026-483d-b25f-fbbf9209a749", 00:12:51.797 "is_configured": true, 00:12:51.797 "data_offset": 2048, 00:12:51.797 "data_size": 63488 00:12:51.797 } 00:12:51.797 ] 00:12:51.797 }' 00:12:51.797 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:51.797 23:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:12:52.732 [2024-05-14 23:28:15.852980] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:12:52.732 "name": "Existed_Raid", 00:12:52.732 "aliases": [ 00:12:52.732 "b1840719-47c4-4cd4-842b-8dd0feaf032c" 00:12:52.732 ], 00:12:52.732 "product_name": "Raid Volume", 00:12:52.732 "block_size": 512, 00:12:52.732 "num_blocks": 126976, 00:12:52.732 "uuid": "b1840719-47c4-4cd4-842b-8dd0feaf032c", 00:12:52.732 "assigned_rate_limits": { 00:12:52.732 "rw_ios_per_sec": 0, 00:12:52.732 "rw_mbytes_per_sec": 0, 00:12:52.732 "r_mbytes_per_sec": 0, 00:12:52.732 "w_mbytes_per_sec": 0 00:12:52.732 }, 00:12:52.732 "claimed": false, 00:12:52.732 "zoned": false, 00:12:52.732 "supported_io_types": { 00:12:52.732 "read": true, 00:12:52.732 "write": true, 00:12:52.732 "unmap": true, 00:12:52.732 "write_zeroes": true, 00:12:52.732 "flush": true, 00:12:52.732 "reset": true, 00:12:52.732 "compare": false, 00:12:52.732 "compare_and_write": false, 00:12:52.732 "abort": false, 00:12:52.732 "nvme_admin": false, 00:12:52.732 "nvme_io": false 00:12:52.732 }, 00:12:52.732 "memory_domains": [ 00:12:52.732 { 00:12:52.732 "dma_device_id": "system", 00:12:52.732 "dma_device_type": 1 00:12:52.732 }, 00:12:52.732 { 00:12:52.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.732 "dma_device_type": 2 00:12:52.732 }, 00:12:52.732 { 00:12:52.732 "dma_device_id": "system", 00:12:52.732 "dma_device_type": 1 00:12:52.732 }, 00:12:52.732 { 00:12:52.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.732 "dma_device_type": 2 00:12:52.732 } 00:12:52.732 ], 00:12:52.732 "driver_specific": { 00:12:52.732 "raid": { 00:12:52.732 "uuid": "b1840719-47c4-4cd4-842b-8dd0feaf032c", 00:12:52.732 "strip_size_kb": 64, 00:12:52.732 "state": "online", 00:12:52.732 "raid_level": "concat", 00:12:52.732 "superblock": true, 00:12:52.732 "num_base_bdevs": 2, 00:12:52.732 "num_base_bdevs_discovered": 2, 00:12:52.732 "num_base_bdevs_operational": 2, 00:12:52.732 "base_bdevs_list": [ 00:12:52.732 { 00:12:52.732 "name": "BaseBdev1", 
00:12:52.732 "uuid": "213882da-3e6e-43cf-99af-412fd28ec60f", 00:12:52.732 "is_configured": true, 00:12:52.732 "data_offset": 2048, 00:12:52.732 "data_size": 63488 00:12:52.732 }, 00:12:52.732 { 00:12:52.732 "name": "BaseBdev2", 00:12:52.732 "uuid": "8961fcf7-8026-483d-b25f-fbbf9209a749", 00:12:52.732 "is_configured": true, 00:12:52.732 "data_offset": 2048, 00:12:52.732 "data_size": 63488 00:12:52.732 } 00:12:52.732 ] 00:12:52.732 } 00:12:52.732 } 00:12:52.732 }' 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:12:52.732 BaseBdev2' 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:52.732 23:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:52.991 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:52.991 "name": "BaseBdev1", 00:12:52.991 "aliases": [ 00:12:52.991 "213882da-3e6e-43cf-99af-412fd28ec60f" 00:12:52.991 ], 00:12:52.991 "product_name": "Malloc disk", 00:12:52.991 "block_size": 512, 00:12:52.991 "num_blocks": 65536, 00:12:52.991 "uuid": "213882da-3e6e-43cf-99af-412fd28ec60f", 00:12:52.991 "assigned_rate_limits": { 00:12:52.991 "rw_ios_per_sec": 0, 00:12:52.991 "rw_mbytes_per_sec": 0, 00:12:52.991 "r_mbytes_per_sec": 0, 00:12:52.991 "w_mbytes_per_sec": 0 00:12:52.991 }, 00:12:52.991 "claimed": true, 00:12:52.991 "claim_type": "exclusive_write", 00:12:52.991 "zoned": false, 00:12:52.991 "supported_io_types": { 00:12:52.991 "read": true, 00:12:52.991 "write": true, 00:12:52.991 "unmap": true, 00:12:52.991 "write_zeroes": true, 00:12:52.991 "flush": true, 00:12:52.991 "reset": true, 00:12:52.991 "compare": false, 00:12:52.991 "compare_and_write": false, 00:12:52.991 "abort": true, 00:12:52.991 "nvme_admin": false, 00:12:52.991 "nvme_io": false 00:12:52.991 }, 00:12:52.991 "memory_domains": [ 00:12:52.991 { 00:12:52.991 "dma_device_id": "system", 00:12:52.991 "dma_device_type": 1 00:12:52.991 }, 00:12:52.991 { 00:12:52.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.991 "dma_device_type": 2 00:12:52.991 } 00:12:52.991 ], 00:12:52.991 "driver_specific": {} 00:12:52.991 }' 00:12:52.991 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:52.991 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:52.991 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:52.991 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:53.249 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:53.249 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.249 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:53.249 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:53.249 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ 
null == null ]] 00:12:53.249 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:53.249 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:53.249 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:53.249 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:12:53.249 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:53.249 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:12:53.570 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:12:53.570 "name": "BaseBdev2", 00:12:53.570 "aliases": [ 00:12:53.570 "8961fcf7-8026-483d-b25f-fbbf9209a749" 00:12:53.570 ], 00:12:53.570 "product_name": "Malloc disk", 00:12:53.570 "block_size": 512, 00:12:53.570 "num_blocks": 65536, 00:12:53.570 "uuid": "8961fcf7-8026-483d-b25f-fbbf9209a749", 00:12:53.570 "assigned_rate_limits": { 00:12:53.570 "rw_ios_per_sec": 0, 00:12:53.570 "rw_mbytes_per_sec": 0, 00:12:53.570 "r_mbytes_per_sec": 0, 00:12:53.570 "w_mbytes_per_sec": 0 00:12:53.570 }, 00:12:53.570 "claimed": true, 00:12:53.570 "claim_type": "exclusive_write", 00:12:53.570 "zoned": false, 00:12:53.570 "supported_io_types": { 00:12:53.570 "read": true, 00:12:53.570 "write": true, 00:12:53.570 "unmap": true, 00:12:53.570 "write_zeroes": true, 00:12:53.570 "flush": true, 00:12:53.570 "reset": true, 00:12:53.570 "compare": false, 00:12:53.570 "compare_and_write": false, 00:12:53.570 "abort": true, 00:12:53.570 "nvme_admin": false, 00:12:53.570 "nvme_io": false 00:12:53.570 }, 00:12:53.570 "memory_domains": [ 00:12:53.570 { 00:12:53.570 "dma_device_id": "system", 00:12:53.570 "dma_device_type": 1 00:12:53.570 }, 00:12:53.570 { 00:12:53.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.570 "dma_device_type": 2 00:12:53.570 } 00:12:53.570 ], 00:12:53.570 "driver_specific": {} 00:12:53.570 }' 00:12:53.570 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:53.570 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:12:53.827 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:12:53.827 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:53.827 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:12:53.827 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.827 23:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:53.827 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:12:53.827 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.827 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:53.827 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:12:54.085 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:12:54.085 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:54.085 [2024-05-14 23:28:17.321151] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.085 [2024-05-14 23:28:17.321199] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.085 [2024-05-14 23:28:17.321471] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:54.382 "name": "Existed_Raid", 00:12:54.382 "uuid": "b1840719-47c4-4cd4-842b-8dd0feaf032c", 00:12:54.382 "strip_size_kb": 64, 00:12:54.382 "state": "offline", 00:12:54.382 "raid_level": "concat", 00:12:54.382 "superblock": true, 00:12:54.382 "num_base_bdevs": 2, 00:12:54.382 "num_base_bdevs_discovered": 1, 00:12:54.382 "num_base_bdevs_operational": 1, 00:12:54.382 "base_bdevs_list": [ 00:12:54.382 { 00:12:54.382 "name": null, 00:12:54.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.382 "is_configured": false, 00:12:54.382 "data_offset": 2048, 00:12:54.382 "data_size": 63488 00:12:54.382 }, 00:12:54.382 { 00:12:54.382 "name": "BaseBdev2", 00:12:54.382 "uuid": "8961fcf7-8026-483d-b25f-fbbf9209a749", 00:12:54.382 "is_configured": true, 00:12:54.382 "data_offset": 2048, 00:12:54.382 "data_size": 63488 00:12:54.382 } 00:12:54.382 ] 00:12:54.382 }' 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:54.382 23:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.315 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:55.315 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.315 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.315 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:12:55.571 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:12:55.572 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.572 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:55.572 [2024-05-14 23:28:18.845376] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.572 [2024-05-14 23:28:18.845434] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:12:55.828 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.828 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.828 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.828 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 54404 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 54404 ']' 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 54404 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 54404 00:12:56.087 killing process with pid 54404 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 54404' 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 54404 00:12:56.087 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 54404 00:12:56.087 [2024-05-14 
23:28:19.154525] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.087 [2024-05-14 23:28:19.154668] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.462 ************************************ 00:12:57.462 END TEST raid_state_function_test_sb 00:12:57.462 ************************************ 00:12:57.462 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:12:57.462 00:12:57.462 real 0m11.690s 00:12:57.462 user 0m20.731s 00:12:57.462 sys 0m1.222s 00:12:57.462 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:57.462 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.462 23:28:20 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:12:57.462 23:28:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:57.462 23:28:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:57.462 23:28:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:57.462 ************************************ 00:12:57.462 START TEST raid_superblock_test 00:12:57.462 ************************************ 00:12:57.462 23:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 2 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=54778 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 54778 /var/tmp/spdk-raid.sock 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@827 -- # '[' -z 54778 ']' 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:57.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:57.463 23:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.463 [2024-05-14 23:28:20.612049] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:12:57.463 [2024-05-14 23:28:20.612580] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54778 ] 00:12:57.721 [2024-05-14 23:28:20.777320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.980 [2024-05-14 23:28:21.011828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.980 [2024-05-14 23:28:21.213990] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.239 23:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:58.239 23:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:12:58.239 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:58.239 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:58.239 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:58.239 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:58.239 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:58.239 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:58.239 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:58.239 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:58.239 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:58.497 malloc1 00:12:58.497 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:58.756 [2024-05-14 23:28:21.897929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:58.756 [2024-05-14 23:28:21.898042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.756 [2024-05-14 23:28:21.898104] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:12:58.756 [2024-05-14 23:28:21.898308] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:58.756 [2024-05-14 23:28:21.901749] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.756 [2024-05-14 23:28:21.901862] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:58.756 pt1 00:12:58.756 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:58.756 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:58.756 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:58.756 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:58.756 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:58.756 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:58.756 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:58.756 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:58.756 23:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:59.013 malloc2 00:12:59.013 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:59.272 [2024-05-14 23:28:22.335716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:59.272 [2024-05-14 23:28:22.335801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.272 [2024-05-14 23:28:22.335849] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:12:59.272 [2024-05-14 23:28:22.335890] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.272 [2024-05-14 23:28:22.337699] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.272 [2024-05-14 23:28:22.337754] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:59.272 pt2 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:12:59.272 [2024-05-14 23:28:22.527832] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:59.272 [2024-05-14 23:28:22.529450] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:59.272 [2024-05-14 23:28:22.529587] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:12:59.272 [2024-05-14 23:28:22.529603] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:59.272 [2024-05-14 23:28:22.529734] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:12:59.272 [2024-05-14 23:28:22.529998] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:12:59.272 [2024-05-14 23:28:22.530015] 
bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:12:59.272 [2024-05-14 23:28:22.530127] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.272 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.530 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:59.530 "name": "raid_bdev1", 00:12:59.530 "uuid": "b7f2dbc4-4695-403e-8770-60c329ca8e69", 00:12:59.530 "strip_size_kb": 64, 00:12:59.530 "state": "online", 00:12:59.530 "raid_level": "concat", 00:12:59.530 "superblock": true, 00:12:59.530 "num_base_bdevs": 2, 00:12:59.530 "num_base_bdevs_discovered": 2, 00:12:59.530 "num_base_bdevs_operational": 2, 00:12:59.530 "base_bdevs_list": [ 00:12:59.530 { 00:12:59.530 "name": "pt1", 00:12:59.530 "uuid": "e05e094c-2ece-5574-9e67-1e55dcfe4acb", 00:12:59.530 "is_configured": true, 00:12:59.530 "data_offset": 2048, 00:12:59.530 "data_size": 63488 00:12:59.530 }, 00:12:59.530 { 00:12:59.530 "name": "pt2", 00:12:59.530 "uuid": "8976fd46-429f-5187-8a6d-a1c5ffe26d43", 00:12:59.530 "is_configured": true, 00:12:59.530 "data_offset": 2048, 00:12:59.530 "data_size": 63488 00:12:59.530 } 00:12:59.530 ] 00:12:59.530 }' 00:12:59.530 23:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:59.530 23:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.462 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:00.462 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:00.462 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:00.462 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:00.462 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:00.462 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:00.462 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:00.462 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:00.462 [2024-05-14 23:28:23.596046] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.462 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:00.462 "name": "raid_bdev1", 00:13:00.462 "aliases": [ 00:13:00.462 "b7f2dbc4-4695-403e-8770-60c329ca8e69" 00:13:00.462 ], 00:13:00.462 "product_name": "Raid Volume", 00:13:00.462 "block_size": 512, 00:13:00.462 "num_blocks": 126976, 00:13:00.462 "uuid": "b7f2dbc4-4695-403e-8770-60c329ca8e69", 00:13:00.462 "assigned_rate_limits": { 00:13:00.462 "rw_ios_per_sec": 0, 00:13:00.462 "rw_mbytes_per_sec": 0, 00:13:00.462 "r_mbytes_per_sec": 0, 00:13:00.462 "w_mbytes_per_sec": 0 00:13:00.462 }, 00:13:00.462 "claimed": false, 00:13:00.462 "zoned": false, 00:13:00.462 "supported_io_types": { 00:13:00.462 "read": true, 00:13:00.462 "write": true, 00:13:00.462 "unmap": true, 00:13:00.462 "write_zeroes": true, 00:13:00.462 "flush": true, 00:13:00.462 "reset": true, 00:13:00.462 "compare": false, 00:13:00.462 "compare_and_write": false, 00:13:00.462 "abort": false, 00:13:00.462 "nvme_admin": false, 00:13:00.462 "nvme_io": false 00:13:00.462 }, 00:13:00.462 "memory_domains": [ 00:13:00.462 { 00:13:00.462 "dma_device_id": "system", 00:13:00.462 "dma_device_type": 1 00:13:00.462 }, 00:13:00.462 { 00:13:00.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.462 "dma_device_type": 2 00:13:00.462 }, 00:13:00.462 { 00:13:00.462 "dma_device_id": "system", 00:13:00.462 "dma_device_type": 1 00:13:00.462 }, 00:13:00.462 { 00:13:00.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.462 "dma_device_type": 2 00:13:00.462 } 00:13:00.462 ], 00:13:00.462 "driver_specific": { 00:13:00.462 "raid": { 00:13:00.462 "uuid": "b7f2dbc4-4695-403e-8770-60c329ca8e69", 00:13:00.462 "strip_size_kb": 64, 00:13:00.462 "state": "online", 00:13:00.462 "raid_level": "concat", 00:13:00.462 "superblock": true, 00:13:00.463 "num_base_bdevs": 2, 00:13:00.463 "num_base_bdevs_discovered": 2, 00:13:00.463 "num_base_bdevs_operational": 2, 00:13:00.463 "base_bdevs_list": [ 00:13:00.463 { 00:13:00.463 "name": "pt1", 00:13:00.463 "uuid": "e05e094c-2ece-5574-9e67-1e55dcfe4acb", 00:13:00.463 "is_configured": true, 00:13:00.463 "data_offset": 2048, 00:13:00.463 "data_size": 63488 00:13:00.463 }, 00:13:00.463 { 00:13:00.463 "name": "pt2", 00:13:00.463 "uuid": "8976fd46-429f-5187-8a6d-a1c5ffe26d43", 00:13:00.463 "is_configured": true, 00:13:00.463 "data_offset": 2048, 00:13:00.463 "data_size": 63488 00:13:00.463 } 00:13:00.463 ] 00:13:00.463 } 00:13:00.463 } 00:13:00.463 }' 00:13:00.463 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:00.463 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:00.463 pt2' 00:13:00.463 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:00.463 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:00.463 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:00.720 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 
00:13:00.720 "name": "pt1", 00:13:00.720 "aliases": [ 00:13:00.720 "e05e094c-2ece-5574-9e67-1e55dcfe4acb" 00:13:00.720 ], 00:13:00.720 "product_name": "passthru", 00:13:00.720 "block_size": 512, 00:13:00.720 "num_blocks": 65536, 00:13:00.720 "uuid": "e05e094c-2ece-5574-9e67-1e55dcfe4acb", 00:13:00.720 "assigned_rate_limits": { 00:13:00.720 "rw_ios_per_sec": 0, 00:13:00.720 "rw_mbytes_per_sec": 0, 00:13:00.720 "r_mbytes_per_sec": 0, 00:13:00.720 "w_mbytes_per_sec": 0 00:13:00.720 }, 00:13:00.720 "claimed": true, 00:13:00.720 "claim_type": "exclusive_write", 00:13:00.720 "zoned": false, 00:13:00.720 "supported_io_types": { 00:13:00.720 "read": true, 00:13:00.720 "write": true, 00:13:00.720 "unmap": true, 00:13:00.720 "write_zeroes": true, 00:13:00.720 "flush": true, 00:13:00.720 "reset": true, 00:13:00.720 "compare": false, 00:13:00.720 "compare_and_write": false, 00:13:00.720 "abort": true, 00:13:00.720 "nvme_admin": false, 00:13:00.720 "nvme_io": false 00:13:00.720 }, 00:13:00.720 "memory_domains": [ 00:13:00.720 { 00:13:00.720 "dma_device_id": "system", 00:13:00.720 "dma_device_type": 1 00:13:00.720 }, 00:13:00.720 { 00:13:00.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.720 "dma_device_type": 2 00:13:00.720 } 00:13:00.720 ], 00:13:00.720 "driver_specific": { 00:13:00.720 "passthru": { 00:13:00.720 "name": "pt1", 00:13:00.720 "base_bdev_name": "malloc1" 00:13:00.720 } 00:13:00.720 } 00:13:00.720 }' 00:13:00.720 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:00.720 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:00.720 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:00.720 23:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:00.977 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:00.977 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:00.977 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:00.977 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:00.977 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:00.977 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:01.236 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:01.236 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:01.236 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:01.236 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:01.236 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:01.494 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:01.494 "name": "pt2", 00:13:01.494 "aliases": [ 00:13:01.494 "8976fd46-429f-5187-8a6d-a1c5ffe26d43" 00:13:01.494 ], 00:13:01.494 "product_name": "passthru", 00:13:01.494 "block_size": 512, 00:13:01.494 "num_blocks": 65536, 00:13:01.494 "uuid": "8976fd46-429f-5187-8a6d-a1c5ffe26d43", 00:13:01.494 "assigned_rate_limits": { 00:13:01.494 "rw_ios_per_sec": 0, 00:13:01.494 "rw_mbytes_per_sec": 0, 00:13:01.494 "r_mbytes_per_sec": 0, 00:13:01.494 
"w_mbytes_per_sec": 0 00:13:01.494 }, 00:13:01.494 "claimed": true, 00:13:01.494 "claim_type": "exclusive_write", 00:13:01.494 "zoned": false, 00:13:01.494 "supported_io_types": { 00:13:01.494 "read": true, 00:13:01.494 "write": true, 00:13:01.494 "unmap": true, 00:13:01.494 "write_zeroes": true, 00:13:01.494 "flush": true, 00:13:01.494 "reset": true, 00:13:01.494 "compare": false, 00:13:01.494 "compare_and_write": false, 00:13:01.494 "abort": true, 00:13:01.494 "nvme_admin": false, 00:13:01.494 "nvme_io": false 00:13:01.494 }, 00:13:01.494 "memory_domains": [ 00:13:01.494 { 00:13:01.494 "dma_device_id": "system", 00:13:01.494 "dma_device_type": 1 00:13:01.494 }, 00:13:01.494 { 00:13:01.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.494 "dma_device_type": 2 00:13:01.494 } 00:13:01.494 ], 00:13:01.494 "driver_specific": { 00:13:01.494 "passthru": { 00:13:01.494 "name": "pt2", 00:13:01.494 "base_bdev_name": "malloc2" 00:13:01.494 } 00:13:01.494 } 00:13:01.494 }' 00:13:01.494 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:01.494 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:01.494 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:01.494 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:01.494 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:01.753 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:01.753 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:01.753 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:01.753 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:01.753 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:01.753 23:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:01.753 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:01.753 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:01.753 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:02.012 [2024-05-14 23:28:25.272299] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.012 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b7f2dbc4-4695-403e-8770-60c329ca8e69 00:13:02.012 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b7f2dbc4-4695-403e-8770-60c329ca8e69 ']' 00:13:02.012 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:02.270 [2024-05-14 23:28:25.480188] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.270 [2024-05-14 23:28:25.480227] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.270 [2024-05-14 23:28:25.480309] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.270 [2024-05-14 23:28:25.480352] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:13:02.270 [2024-05-14 23:28:25.480363] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:13:02.270 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:02.270 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.528 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:02.528 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:02.528 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:02.528 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:02.787 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:02.787 23:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:03.047 23:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:03.047 23:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:03.305 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:03.564 [2024-05-14 23:28:26.600390] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:03.564 [2024-05-14 23:28:26.602088] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:03.564 [2024-05-14 23:28:26.602147] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:03.564 [2024-05-14 23:28:26.602233] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:03.564 [2024-05-14 23:28:26.602275] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.564 [2024-05-14 23:28:26.602289] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:13:03.564 request: 00:13:03.564 { 00:13:03.564 "name": "raid_bdev1", 00:13:03.564 "raid_level": "concat", 00:13:03.564 "base_bdevs": [ 00:13:03.564 "malloc1", 00:13:03.564 "malloc2" 00:13:03.564 ], 00:13:03.564 "superblock": false, 00:13:03.564 "strip_size_kb": 64, 00:13:03.564 "method": "bdev_raid_create", 00:13:03.564 "req_id": 1 00:13:03.564 } 00:13:03.564 Got JSON-RPC error response 00:13:03.564 response: 00:13:03.564 { 00:13:03.564 "code": -17, 00:13:03.564 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:03.564 } 00:13:03.564 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:03.564 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:03.564 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:03.564 23:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:03.564 23:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:03.564 23:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.564 23:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:03.564 23:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:03.564 23:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:03.822 [2024-05-14 23:28:27.052384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:03.822 [2024-05-14 23:28:27.052499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.822 [2024-05-14 23:28:27.052545] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:13:03.822 [2024-05-14 23:28:27.052575] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.822 [2024-05-14 23:28:27.054320] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.822 [2024-05-14 23:28:27.054383] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:03.822 [2024-05-14 23:28:27.054487] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:03.822 [2024-05-14 23:28:27.054559] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:03.822 pt1 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.822 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.080 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:04.080 "name": "raid_bdev1", 00:13:04.080 "uuid": "b7f2dbc4-4695-403e-8770-60c329ca8e69", 00:13:04.080 "strip_size_kb": 64, 00:13:04.080 "state": "configuring", 00:13:04.080 "raid_level": "concat", 00:13:04.080 "superblock": true, 00:13:04.080 "num_base_bdevs": 2, 00:13:04.080 "num_base_bdevs_discovered": 1, 00:13:04.080 "num_base_bdevs_operational": 2, 00:13:04.080 "base_bdevs_list": [ 00:13:04.080 { 00:13:04.080 "name": "pt1", 00:13:04.080 "uuid": "e05e094c-2ece-5574-9e67-1e55dcfe4acb", 00:13:04.080 "is_configured": true, 00:13:04.080 "data_offset": 2048, 00:13:04.080 "data_size": 63488 00:13:04.080 }, 00:13:04.080 { 00:13:04.080 "name": null, 00:13:04.080 "uuid": "8976fd46-429f-5187-8a6d-a1c5ffe26d43", 00:13:04.080 "is_configured": false, 00:13:04.080 "data_offset": 2048, 00:13:04.080 "data_size": 63488 00:13:04.080 } 00:13:04.080 ] 00:13:04.080 }' 00:13:04.080 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:04.080 23:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.677 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:04.677 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:04.677 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:04.677 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:04.935 [2024-05-14 23:28:28.140616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:04.935 [2024-05-14 23:28:28.140757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.935 [2024-05-14 23:28:28.140818] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:13:04.935 [2024-05-14 23:28:28.140850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.935 [2024-05-14 
23:28:28.141531] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.935 [2024-05-14 23:28:28.141594] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:04.935 [2024-05-14 23:28:28.141689] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:04.935 [2024-05-14 23:28:28.141720] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:04.935 [2024-05-14 23:28:28.141810] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:13:04.935 [2024-05-14 23:28:28.141825] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:04.935 [2024-05-14 23:28:28.141916] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:13:04.935 [2024-05-14 23:28:28.142168] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:13:04.935 [2024-05-14 23:28:28.142186] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:13:04.935 [2024-05-14 23:28:28.142304] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.935 pt2 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.935 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.192 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:05.192 "name": "raid_bdev1", 00:13:05.192 "uuid": "b7f2dbc4-4695-403e-8770-60c329ca8e69", 00:13:05.192 "strip_size_kb": 64, 00:13:05.192 "state": "online", 00:13:05.192 "raid_level": "concat", 00:13:05.192 "superblock": true, 00:13:05.192 "num_base_bdevs": 2, 00:13:05.192 "num_base_bdevs_discovered": 2, 00:13:05.192 "num_base_bdevs_operational": 2, 00:13:05.192 "base_bdevs_list": [ 00:13:05.192 { 00:13:05.192 "name": "pt1", 00:13:05.192 "uuid": "e05e094c-2ece-5574-9e67-1e55dcfe4acb", 00:13:05.192 "is_configured": true, 00:13:05.192 "data_offset": 2048, 00:13:05.192 
"data_size": 63488 00:13:05.192 }, 00:13:05.193 { 00:13:05.193 "name": "pt2", 00:13:05.193 "uuid": "8976fd46-429f-5187-8a6d-a1c5ffe26d43", 00:13:05.193 "is_configured": true, 00:13:05.193 "data_offset": 2048, 00:13:05.193 "data_size": 63488 00:13:05.193 } 00:13:05.193 ] 00:13:05.193 }' 00:13:05.193 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:05.193 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:06.130 [2024-05-14 23:28:29.307749] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:06.130 "name": "raid_bdev1", 00:13:06.130 "aliases": [ 00:13:06.130 "b7f2dbc4-4695-403e-8770-60c329ca8e69" 00:13:06.130 ], 00:13:06.130 "product_name": "Raid Volume", 00:13:06.130 "block_size": 512, 00:13:06.130 "num_blocks": 126976, 00:13:06.130 "uuid": "b7f2dbc4-4695-403e-8770-60c329ca8e69", 00:13:06.130 "assigned_rate_limits": { 00:13:06.130 "rw_ios_per_sec": 0, 00:13:06.130 "rw_mbytes_per_sec": 0, 00:13:06.130 "r_mbytes_per_sec": 0, 00:13:06.130 "w_mbytes_per_sec": 0 00:13:06.130 }, 00:13:06.130 "claimed": false, 00:13:06.130 "zoned": false, 00:13:06.130 "supported_io_types": { 00:13:06.130 "read": true, 00:13:06.130 "write": true, 00:13:06.130 "unmap": true, 00:13:06.130 "write_zeroes": true, 00:13:06.130 "flush": true, 00:13:06.130 "reset": true, 00:13:06.130 "compare": false, 00:13:06.130 "compare_and_write": false, 00:13:06.130 "abort": false, 00:13:06.130 "nvme_admin": false, 00:13:06.130 "nvme_io": false 00:13:06.130 }, 00:13:06.130 "memory_domains": [ 00:13:06.130 { 00:13:06.130 "dma_device_id": "system", 00:13:06.130 "dma_device_type": 1 00:13:06.130 }, 00:13:06.130 { 00:13:06.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.130 "dma_device_type": 2 00:13:06.130 }, 00:13:06.130 { 00:13:06.130 "dma_device_id": "system", 00:13:06.130 "dma_device_type": 1 00:13:06.130 }, 00:13:06.130 { 00:13:06.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.130 "dma_device_type": 2 00:13:06.130 } 00:13:06.130 ], 00:13:06.130 "driver_specific": { 00:13:06.130 "raid": { 00:13:06.130 "uuid": "b7f2dbc4-4695-403e-8770-60c329ca8e69", 00:13:06.130 "strip_size_kb": 64, 00:13:06.130 "state": "online", 00:13:06.130 "raid_level": "concat", 00:13:06.130 "superblock": true, 00:13:06.130 "num_base_bdevs": 2, 00:13:06.130 "num_base_bdevs_discovered": 2, 00:13:06.130 "num_base_bdevs_operational": 2, 00:13:06.130 "base_bdevs_list": [ 00:13:06.130 { 00:13:06.130 "name": "pt1", 00:13:06.130 "uuid": 
"e05e094c-2ece-5574-9e67-1e55dcfe4acb", 00:13:06.130 "is_configured": true, 00:13:06.130 "data_offset": 2048, 00:13:06.130 "data_size": 63488 00:13:06.130 }, 00:13:06.130 { 00:13:06.130 "name": "pt2", 00:13:06.130 "uuid": "8976fd46-429f-5187-8a6d-a1c5ffe26d43", 00:13:06.130 "is_configured": true, 00:13:06.130 "data_offset": 2048, 00:13:06.130 "data_size": 63488 00:13:06.130 } 00:13:06.130 ] 00:13:06.130 } 00:13:06.130 } 00:13:06.130 }' 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:06.130 pt2' 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:06.130 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:06.389 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:06.389 "name": "pt1", 00:13:06.389 "aliases": [ 00:13:06.389 "e05e094c-2ece-5574-9e67-1e55dcfe4acb" 00:13:06.389 ], 00:13:06.389 "product_name": "passthru", 00:13:06.389 "block_size": 512, 00:13:06.389 "num_blocks": 65536, 00:13:06.389 "uuid": "e05e094c-2ece-5574-9e67-1e55dcfe4acb", 00:13:06.389 "assigned_rate_limits": { 00:13:06.389 "rw_ios_per_sec": 0, 00:13:06.389 "rw_mbytes_per_sec": 0, 00:13:06.389 "r_mbytes_per_sec": 0, 00:13:06.389 "w_mbytes_per_sec": 0 00:13:06.389 }, 00:13:06.389 "claimed": true, 00:13:06.389 "claim_type": "exclusive_write", 00:13:06.389 "zoned": false, 00:13:06.389 "supported_io_types": { 00:13:06.389 "read": true, 00:13:06.389 "write": true, 00:13:06.389 "unmap": true, 00:13:06.389 "write_zeroes": true, 00:13:06.389 "flush": true, 00:13:06.389 "reset": true, 00:13:06.389 "compare": false, 00:13:06.389 "compare_and_write": false, 00:13:06.389 "abort": true, 00:13:06.389 "nvme_admin": false, 00:13:06.389 "nvme_io": false 00:13:06.389 }, 00:13:06.389 "memory_domains": [ 00:13:06.389 { 00:13:06.389 "dma_device_id": "system", 00:13:06.389 "dma_device_type": 1 00:13:06.389 }, 00:13:06.389 { 00:13:06.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.389 "dma_device_type": 2 00:13:06.389 } 00:13:06.389 ], 00:13:06.389 "driver_specific": { 00:13:06.389 "passthru": { 00:13:06.389 "name": "pt1", 00:13:06.389 "base_bdev_name": "malloc1" 00:13:06.389 } 00:13:06.389 } 00:13:06.389 }' 00:13:06.389 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:06.647 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:06.647 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:06.647 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:06.647 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:06.647 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:06.647 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:06.905 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:06.905 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:06.905 
23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:06.905 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:06.905 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:06.905 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:06.905 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:06.905 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:07.163 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:07.163 "name": "pt2", 00:13:07.163 "aliases": [ 00:13:07.163 "8976fd46-429f-5187-8a6d-a1c5ffe26d43" 00:13:07.163 ], 00:13:07.163 "product_name": "passthru", 00:13:07.163 "block_size": 512, 00:13:07.163 "num_blocks": 65536, 00:13:07.163 "uuid": "8976fd46-429f-5187-8a6d-a1c5ffe26d43", 00:13:07.163 "assigned_rate_limits": { 00:13:07.163 "rw_ios_per_sec": 0, 00:13:07.163 "rw_mbytes_per_sec": 0, 00:13:07.163 "r_mbytes_per_sec": 0, 00:13:07.163 "w_mbytes_per_sec": 0 00:13:07.163 }, 00:13:07.163 "claimed": true, 00:13:07.163 "claim_type": "exclusive_write", 00:13:07.163 "zoned": false, 00:13:07.163 "supported_io_types": { 00:13:07.163 "read": true, 00:13:07.163 "write": true, 00:13:07.163 "unmap": true, 00:13:07.163 "write_zeroes": true, 00:13:07.163 "flush": true, 00:13:07.163 "reset": true, 00:13:07.163 "compare": false, 00:13:07.163 "compare_and_write": false, 00:13:07.163 "abort": true, 00:13:07.163 "nvme_admin": false, 00:13:07.163 "nvme_io": false 00:13:07.163 }, 00:13:07.163 "memory_domains": [ 00:13:07.163 { 00:13:07.163 "dma_device_id": "system", 00:13:07.163 "dma_device_type": 1 00:13:07.163 }, 00:13:07.163 { 00:13:07.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.163 "dma_device_type": 2 00:13:07.163 } 00:13:07.163 ], 00:13:07.163 "driver_specific": { 00:13:07.163 "passthru": { 00:13:07.163 "name": "pt2", 00:13:07.163 "base_bdev_name": "malloc2" 00:13:07.163 } 00:13:07.163 } 00:13:07.163 }' 00:13:07.163 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:07.163 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:07.421 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:07.421 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:07.421 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:07.421 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:07.421 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:07.421 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:07.421 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:07.421 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:07.678 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:07.678 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:07.678 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:13:07.678 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:07.678 [2024-05-14 23:28:30.948133] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.678 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b7f2dbc4-4695-403e-8770-60c329ca8e69 '!=' b7f2dbc4-4695-403e-8770-60c329ca8e69 ']' 00:13:07.678 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:07.678 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:07.678 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:13:07.678 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 54778 00:13:07.678 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 54778 ']' 00:13:07.678 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 54778 00:13:07.937 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:13:07.937 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:07.937 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 54778 00:13:07.937 killing process with pid 54778 00:13:07.937 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:07.937 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:07.937 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 54778' 00:13:07.937 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 54778 00:13:07.937 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 54778 00:13:07.937 [2024-05-14 23:28:30.986584] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.937 [2024-05-14 23:28:30.986680] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.937 [2024-05-14 23:28:30.986718] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.937 [2024-05-14 23:28:30.986729] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:13:07.937 [2024-05-14 23:28:31.155845] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.312 ************************************ 00:13:09.312 END TEST raid_superblock_test 00:13:09.312 ************************************ 00:13:09.312 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:13:09.312 00:13:09.312 real 0m11.952s 00:13:09.312 user 0m21.270s 00:13:09.312 sys 0m1.256s 00:13:09.312 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:09.312 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.312 23:28:32 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:13:09.312 23:28:32 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:13:09.312 23:28:32 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:09.312 23:28:32 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:09.312 23:28:32 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.312 ************************************ 00:13:09.312 START TEST raid_state_function_test 00:13:09.312 ************************************ 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 false 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:13:09.313 Process raid pid: 55160 00:13:09.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
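The superblock test that just finished tears down the passthru base bdevs and then checks that bdev_raid_create on the raw malloc bdevs is rejected with -17 (File exists), because each malloc bdev still carries a superblock naming a different raid bdev; recreating pt1/pt2 lets the examine path reassemble raid_bdev1 back to "online". A minimal sketch of that flow, not the test script itself, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock and using our own $rpc shorthand:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Expected to fail with -17: malloc1/malloc2 still hold a superblock for another raid bdev.
$rpc bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 || true
# Wrapping the malloc bdevs in passthru bdevs with fixed UUIDs re-triggers examine,
# which finds the superblock again and reassembles raid_bdev1.
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# State should come back as "online" with both base bdevs discovered.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'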
00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=55160 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 55160' 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 55160 /var/tmp/spdk-raid.sock 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 55160 ']' 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:09.313 23:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.313 [2024-05-14 23:28:32.591829] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
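The trace above shows raid_state_function_test launching its own bdev_svc app on /var/tmp/spdk-raid.sock and then creating a raid1 from base bdevs that do not exist yet, which parks the raid in the "configuring" state. A rough, simplified outline of that startup (the real test uses the waitforlisten helper from autotest_common.sh; the sleep below is only a stand-in, and $svc/$rpc are our own shorthands):

svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
sleep 1   # stand-in for: waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
# BaseBdev1/BaseBdev2 do not exist yet, so Existed_Raid stays "configuring"
# with num_base_bdevs_discovered == 0 until they are created.
$rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'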
00:13:09.313 [2024-05-14 23:28:32.592040] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.572 [2024-05-14 23:28:32.757024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.831 [2024-05-14 23:28:32.984151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.089 [2024-05-14 23:28:33.179941] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.354 23:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:10.354 23:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:13:10.354 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:10.618 [2024-05-14 23:28:33.647021] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:10.618 [2024-05-14 23:28:33.647120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:10.618 [2024-05-14 23:28:33.647152] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.618 [2024-05-14 23:28:33.647440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.618 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.876 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:10.876 "name": "Existed_Raid", 00:13:10.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.876 "strip_size_kb": 0, 00:13:10.876 "state": "configuring", 00:13:10.876 "raid_level": "raid1", 00:13:10.876 "superblock": false, 00:13:10.876 "num_base_bdevs": 2, 00:13:10.876 "num_base_bdevs_discovered": 0, 00:13:10.876 "num_base_bdevs_operational": 2, 00:13:10.876 "base_bdevs_list": [ 00:13:10.876 { 
00:13:10.876 "name": "BaseBdev1", 00:13:10.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.876 "is_configured": false, 00:13:10.876 "data_offset": 0, 00:13:10.876 "data_size": 0 00:13:10.876 }, 00:13:10.876 { 00:13:10.876 "name": "BaseBdev2", 00:13:10.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.876 "is_configured": false, 00:13:10.876 "data_offset": 0, 00:13:10.876 "data_size": 0 00:13:10.876 } 00:13:10.876 ] 00:13:10.876 }' 00:13:10.876 23:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:10.876 23:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.442 23:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:11.700 [2024-05-14 23:28:34.819061] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.700 [2024-05-14 23:28:34.819102] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:13:11.700 23:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:11.959 [2024-05-14 23:28:35.011081] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:11.959 [2024-05-14 23:28:35.011464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:11.959 [2024-05-14 23:28:35.011494] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:11.959 [2024-05-14 23:28:35.011529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:11.959 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:12.216 BaseBdev1 00:13:12.216 [2024-05-14 23:28:35.291293] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.216 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:13:12.216 23:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:12.216 23:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:12.216 23:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:12.216 23:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:12.216 23:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:12.216 23:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:12.217 23:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:12.475 [ 00:13:12.475 { 00:13:12.475 "name": "BaseBdev1", 00:13:12.475 "aliases": [ 00:13:12.475 "1a3f1111-8421-41e0-a2af-8e9bb188df1e" 00:13:12.475 ], 00:13:12.475 "product_name": "Malloc disk", 00:13:12.475 "block_size": 512, 00:13:12.475 "num_blocks": 65536, 
00:13:12.475 "uuid": "1a3f1111-8421-41e0-a2af-8e9bb188df1e", 00:13:12.475 "assigned_rate_limits": { 00:13:12.475 "rw_ios_per_sec": 0, 00:13:12.475 "rw_mbytes_per_sec": 0, 00:13:12.475 "r_mbytes_per_sec": 0, 00:13:12.475 "w_mbytes_per_sec": 0 00:13:12.475 }, 00:13:12.475 "claimed": true, 00:13:12.475 "claim_type": "exclusive_write", 00:13:12.475 "zoned": false, 00:13:12.475 "supported_io_types": { 00:13:12.475 "read": true, 00:13:12.475 "write": true, 00:13:12.475 "unmap": true, 00:13:12.475 "write_zeroes": true, 00:13:12.475 "flush": true, 00:13:12.475 "reset": true, 00:13:12.475 "compare": false, 00:13:12.475 "compare_and_write": false, 00:13:12.475 "abort": true, 00:13:12.475 "nvme_admin": false, 00:13:12.475 "nvme_io": false 00:13:12.475 }, 00:13:12.475 "memory_domains": [ 00:13:12.475 { 00:13:12.475 "dma_device_id": "system", 00:13:12.475 "dma_device_type": 1 00:13:12.475 }, 00:13:12.475 { 00:13:12.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.475 "dma_device_type": 2 00:13:12.475 } 00:13:12.475 ], 00:13:12.475 "driver_specific": {} 00:13:12.475 } 00:13:12.475 ] 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.475 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.734 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:12.734 "name": "Existed_Raid", 00:13:12.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.734 "strip_size_kb": 0, 00:13:12.734 "state": "configuring", 00:13:12.734 "raid_level": "raid1", 00:13:12.734 "superblock": false, 00:13:12.734 "num_base_bdevs": 2, 00:13:12.734 "num_base_bdevs_discovered": 1, 00:13:12.734 "num_base_bdevs_operational": 2, 00:13:12.734 "base_bdevs_list": [ 00:13:12.734 { 00:13:12.734 "name": "BaseBdev1", 00:13:12.734 "uuid": "1a3f1111-8421-41e0-a2af-8e9bb188df1e", 00:13:12.734 "is_configured": true, 00:13:12.734 "data_offset": 0, 00:13:12.734 "data_size": 65536 00:13:12.734 }, 00:13:12.734 { 00:13:12.734 "name": "BaseBdev2", 00:13:12.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.734 "is_configured": false, 00:13:12.734 
"data_offset": 0, 00:13:12.734 "data_size": 0 00:13:12.734 } 00:13:12.734 ] 00:13:12.734 }' 00:13:12.734 23:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:12.734 23:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.301 23:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:13.559 [2024-05-14 23:28:36.779691] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.559 [2024-05-14 23:28:36.779776] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:13:13.559 23:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:13.818 [2024-05-14 23:28:36.987739] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.818 [2024-05-14 23:28:36.990423] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.818 [2024-05-14 23:28:36.990555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.818 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.076 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:14.076 "name": "Existed_Raid", 00:13:14.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.076 "strip_size_kb": 0, 00:13:14.076 "state": "configuring", 00:13:14.076 "raid_level": "raid1", 00:13:14.076 "superblock": false, 00:13:14.076 "num_base_bdevs": 2, 00:13:14.076 "num_base_bdevs_discovered": 1, 00:13:14.076 "num_base_bdevs_operational": 2, 00:13:14.076 "base_bdevs_list": [ 
00:13:14.076 { 00:13:14.076 "name": "BaseBdev1", 00:13:14.076 "uuid": "1a3f1111-8421-41e0-a2af-8e9bb188df1e", 00:13:14.076 "is_configured": true, 00:13:14.076 "data_offset": 0, 00:13:14.076 "data_size": 65536 00:13:14.077 }, 00:13:14.077 { 00:13:14.077 "name": "BaseBdev2", 00:13:14.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.077 "is_configured": false, 00:13:14.077 "data_offset": 0, 00:13:14.077 "data_size": 0 00:13:14.077 } 00:13:14.077 ] 00:13:14.077 }' 00:13:14.077 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:14.077 23:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.643 23:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:14.940 [2024-05-14 23:28:38.079354] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.940 [2024-05-14 23:28:38.079403] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:13:14.940 [2024-05-14 23:28:38.079427] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:14.940 [2024-05-14 23:28:38.079526] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:13:14.940 [2024-05-14 23:28:38.079752] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:13:14.940 [2024-05-14 23:28:38.079766] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:13:14.940 [2024-05-14 23:28:38.079974] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.940 BaseBdev2 00:13:14.940 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:13:14.940 23:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:14.940 23:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:14.940 23:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:14.940 23:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:14.940 23:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:14.940 23:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:15.198 [ 00:13:15.198 { 00:13:15.198 "name": "BaseBdev2", 00:13:15.198 "aliases": [ 00:13:15.198 "456e7e99-ab13-4289-8a0f-39f35845b97b" 00:13:15.198 ], 00:13:15.198 "product_name": "Malloc disk", 00:13:15.198 "block_size": 512, 00:13:15.198 "num_blocks": 65536, 00:13:15.198 "uuid": "456e7e99-ab13-4289-8a0f-39f35845b97b", 00:13:15.198 "assigned_rate_limits": { 00:13:15.198 "rw_ios_per_sec": 0, 00:13:15.198 "rw_mbytes_per_sec": 0, 00:13:15.198 "r_mbytes_per_sec": 0, 00:13:15.198 "w_mbytes_per_sec": 0 00:13:15.198 }, 00:13:15.198 "claimed": true, 00:13:15.198 "claim_type": "exclusive_write", 00:13:15.198 "zoned": false, 00:13:15.198 "supported_io_types": { 00:13:15.198 "read": 
true, 00:13:15.198 "write": true, 00:13:15.198 "unmap": true, 00:13:15.198 "write_zeroes": true, 00:13:15.198 "flush": true, 00:13:15.198 "reset": true, 00:13:15.198 "compare": false, 00:13:15.198 "compare_and_write": false, 00:13:15.198 "abort": true, 00:13:15.198 "nvme_admin": false, 00:13:15.198 "nvme_io": false 00:13:15.198 }, 00:13:15.198 "memory_domains": [ 00:13:15.198 { 00:13:15.198 "dma_device_id": "system", 00:13:15.198 "dma_device_type": 1 00:13:15.198 }, 00:13:15.198 { 00:13:15.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.198 "dma_device_type": 2 00:13:15.198 } 00:13:15.198 ], 00:13:15.198 "driver_specific": {} 00:13:15.198 } 00:13:15.198 ] 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.198 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.457 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:15.457 "name": "Existed_Raid", 00:13:15.457 "uuid": "08f4394c-5fa4-40fc-959a-51732aed9633", 00:13:15.457 "strip_size_kb": 0, 00:13:15.457 "state": "online", 00:13:15.457 "raid_level": "raid1", 00:13:15.457 "superblock": false, 00:13:15.457 "num_base_bdevs": 2, 00:13:15.457 "num_base_bdevs_discovered": 2, 00:13:15.457 "num_base_bdevs_operational": 2, 00:13:15.457 "base_bdevs_list": [ 00:13:15.457 { 00:13:15.457 "name": "BaseBdev1", 00:13:15.457 "uuid": "1a3f1111-8421-41e0-a2af-8e9bb188df1e", 00:13:15.457 "is_configured": true, 00:13:15.457 "data_offset": 0, 00:13:15.457 "data_size": 65536 00:13:15.457 }, 00:13:15.457 { 00:13:15.457 "name": "BaseBdev2", 00:13:15.457 "uuid": "456e7e99-ab13-4289-8a0f-39f35845b97b", 00:13:15.457 "is_configured": true, 00:13:15.457 "data_offset": 0, 00:13:15.457 "data_size": 65536 00:13:15.457 } 00:13:15.457 ] 00:13:15.457 }' 00:13:15.457 23:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:15.457 23:28:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.393 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:13:16.393 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:16.393 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:16.393 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:16.393 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:16.393 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:16.393 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:16.393 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:16.393 [2024-05-14 23:28:39.643921] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.393 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:16.393 "name": "Existed_Raid", 00:13:16.393 "aliases": [ 00:13:16.393 "08f4394c-5fa4-40fc-959a-51732aed9633" 00:13:16.393 ], 00:13:16.393 "product_name": "Raid Volume", 00:13:16.393 "block_size": 512, 00:13:16.393 "num_blocks": 65536, 00:13:16.393 "uuid": "08f4394c-5fa4-40fc-959a-51732aed9633", 00:13:16.393 "assigned_rate_limits": { 00:13:16.393 "rw_ios_per_sec": 0, 00:13:16.393 "rw_mbytes_per_sec": 0, 00:13:16.393 "r_mbytes_per_sec": 0, 00:13:16.393 "w_mbytes_per_sec": 0 00:13:16.393 }, 00:13:16.393 "claimed": false, 00:13:16.393 "zoned": false, 00:13:16.393 "supported_io_types": { 00:13:16.393 "read": true, 00:13:16.393 "write": true, 00:13:16.393 "unmap": false, 00:13:16.393 "write_zeroes": true, 00:13:16.393 "flush": false, 00:13:16.393 "reset": true, 00:13:16.393 "compare": false, 00:13:16.393 "compare_and_write": false, 00:13:16.393 "abort": false, 00:13:16.393 "nvme_admin": false, 00:13:16.393 "nvme_io": false 00:13:16.393 }, 00:13:16.393 "memory_domains": [ 00:13:16.393 { 00:13:16.393 "dma_device_id": "system", 00:13:16.393 "dma_device_type": 1 00:13:16.393 }, 00:13:16.393 { 00:13:16.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.393 "dma_device_type": 2 00:13:16.393 }, 00:13:16.393 { 00:13:16.393 "dma_device_id": "system", 00:13:16.393 "dma_device_type": 1 00:13:16.393 }, 00:13:16.393 { 00:13:16.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.393 "dma_device_type": 2 00:13:16.393 } 00:13:16.393 ], 00:13:16.393 "driver_specific": { 00:13:16.393 "raid": { 00:13:16.393 "uuid": "08f4394c-5fa4-40fc-959a-51732aed9633", 00:13:16.393 "strip_size_kb": 0, 00:13:16.393 "state": "online", 00:13:16.393 "raid_level": "raid1", 00:13:16.393 "superblock": false, 00:13:16.393 "num_base_bdevs": 2, 00:13:16.393 "num_base_bdevs_discovered": 2, 00:13:16.393 "num_base_bdevs_operational": 2, 00:13:16.393 "base_bdevs_list": [ 00:13:16.393 { 00:13:16.393 "name": "BaseBdev1", 00:13:16.393 "uuid": "1a3f1111-8421-41e0-a2af-8e9bb188df1e", 00:13:16.393 "is_configured": true, 00:13:16.393 "data_offset": 0, 00:13:16.393 "data_size": 65536 00:13:16.393 }, 00:13:16.393 { 00:13:16.393 "name": "BaseBdev2", 00:13:16.393 "uuid": "456e7e99-ab13-4289-8a0f-39f35845b97b", 00:13:16.393 "is_configured": true, 00:13:16.393 "data_offset": 0, 00:13:16.393 "data_size": 
65536 00:13:16.393 } 00:13:16.393 ] 00:13:16.393 } 00:13:16.393 } 00:13:16.393 }' 00:13:16.393 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:16.652 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:13:16.652 BaseBdev2' 00:13:16.652 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:16.652 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:16.652 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:16.652 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:16.652 "name": "BaseBdev1", 00:13:16.652 "aliases": [ 00:13:16.652 "1a3f1111-8421-41e0-a2af-8e9bb188df1e" 00:13:16.652 ], 00:13:16.652 "product_name": "Malloc disk", 00:13:16.652 "block_size": 512, 00:13:16.652 "num_blocks": 65536, 00:13:16.652 "uuid": "1a3f1111-8421-41e0-a2af-8e9bb188df1e", 00:13:16.652 "assigned_rate_limits": { 00:13:16.652 "rw_ios_per_sec": 0, 00:13:16.652 "rw_mbytes_per_sec": 0, 00:13:16.652 "r_mbytes_per_sec": 0, 00:13:16.652 "w_mbytes_per_sec": 0 00:13:16.652 }, 00:13:16.652 "claimed": true, 00:13:16.652 "claim_type": "exclusive_write", 00:13:16.652 "zoned": false, 00:13:16.652 "supported_io_types": { 00:13:16.652 "read": true, 00:13:16.652 "write": true, 00:13:16.652 "unmap": true, 00:13:16.652 "write_zeroes": true, 00:13:16.652 "flush": true, 00:13:16.652 "reset": true, 00:13:16.652 "compare": false, 00:13:16.652 "compare_and_write": false, 00:13:16.652 "abort": true, 00:13:16.652 "nvme_admin": false, 00:13:16.652 "nvme_io": false 00:13:16.652 }, 00:13:16.652 "memory_domains": [ 00:13:16.652 { 00:13:16.652 "dma_device_id": "system", 00:13:16.652 "dma_device_type": 1 00:13:16.652 }, 00:13:16.652 { 00:13:16.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.652 "dma_device_type": 2 00:13:16.652 } 00:13:16.652 ], 00:13:16.652 "driver_specific": {} 00:13:16.652 }' 00:13:16.652 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:16.910 23:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:16.910 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:16.910 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:16.910 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:16.910 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:16.910 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:17.168 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:17.168 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:17.168 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:17.168 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:17.168 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:17.168 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in 
$base_bdev_names 00:13:17.168 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:17.168 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:17.426 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:17.426 "name": "BaseBdev2", 00:13:17.426 "aliases": [ 00:13:17.426 "456e7e99-ab13-4289-8a0f-39f35845b97b" 00:13:17.426 ], 00:13:17.426 "product_name": "Malloc disk", 00:13:17.426 "block_size": 512, 00:13:17.426 "num_blocks": 65536, 00:13:17.426 "uuid": "456e7e99-ab13-4289-8a0f-39f35845b97b", 00:13:17.426 "assigned_rate_limits": { 00:13:17.426 "rw_ios_per_sec": 0, 00:13:17.426 "rw_mbytes_per_sec": 0, 00:13:17.426 "r_mbytes_per_sec": 0, 00:13:17.426 "w_mbytes_per_sec": 0 00:13:17.426 }, 00:13:17.426 "claimed": true, 00:13:17.426 "claim_type": "exclusive_write", 00:13:17.426 "zoned": false, 00:13:17.426 "supported_io_types": { 00:13:17.426 "read": true, 00:13:17.426 "write": true, 00:13:17.426 "unmap": true, 00:13:17.426 "write_zeroes": true, 00:13:17.426 "flush": true, 00:13:17.426 "reset": true, 00:13:17.426 "compare": false, 00:13:17.426 "compare_and_write": false, 00:13:17.426 "abort": true, 00:13:17.426 "nvme_admin": false, 00:13:17.426 "nvme_io": false 00:13:17.426 }, 00:13:17.426 "memory_domains": [ 00:13:17.426 { 00:13:17.426 "dma_device_id": "system", 00:13:17.426 "dma_device_type": 1 00:13:17.426 }, 00:13:17.426 { 00:13:17.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.426 "dma_device_type": 2 00:13:17.426 } 00:13:17.426 ], 00:13:17.426 "driver_specific": {} 00:13:17.426 }' 00:13:17.426 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:17.426 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:17.684 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:17.684 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:17.684 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:17.684 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:17.684 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:17.684 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:17.684 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:17.684 23:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:17.955 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:17.955 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:17.955 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:18.230 [2024-05-14 23:28:41.248070] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.230 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # 
case $1 in 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.231 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.489 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:18.489 "name": "Existed_Raid", 00:13:18.489 "uuid": "08f4394c-5fa4-40fc-959a-51732aed9633", 00:13:18.489 "strip_size_kb": 0, 00:13:18.489 "state": "online", 00:13:18.489 "raid_level": "raid1", 00:13:18.489 "superblock": false, 00:13:18.489 "num_base_bdevs": 2, 00:13:18.489 "num_base_bdevs_discovered": 1, 00:13:18.489 "num_base_bdevs_operational": 1, 00:13:18.489 "base_bdevs_list": [ 00:13:18.489 { 00:13:18.489 "name": null, 00:13:18.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.489 "is_configured": false, 00:13:18.489 "data_offset": 0, 00:13:18.489 "data_size": 65536 00:13:18.489 }, 00:13:18.489 { 00:13:18.489 "name": "BaseBdev2", 00:13:18.489 "uuid": "456e7e99-ab13-4289-8a0f-39f35845b97b", 00:13:18.489 "is_configured": true, 00:13:18.489 "data_offset": 0, 00:13:18.489 "data_size": 65536 00:13:18.489 } 00:13:18.489 ] 00:13:18.489 }' 00:13:18.489 23:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:18.489 23:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.056 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:19.056 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:19.056 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.056 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:19.315 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:19.315 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:13:19.315 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:19.315 [2024-05-14 23:28:42.562864] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:19.315 [2024-05-14 23:28:42.562946] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.573 [2024-05-14 23:28:42.643131] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.573 [2024-05-14 23:28:42.643670] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.573 [2024-05-14 23:28:42.643693] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:13:19.573 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:19.573 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:19.573 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.573 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 55160 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 55160 ']' 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 55160 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 55160 00:13:19.831 killing process with pid 55160 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55160' 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 55160 00:13:19.831 23:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 55160 00:13:19.831 [2024-05-14 23:28:42.927176] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.831 [2024-05-14 23:28:42.927289] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.204 ************************************ 00:13:21.204 END TEST raid_state_function_test 00:13:21.204 ************************************ 00:13:21.204 23:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:13:21.204 00:13:21.204 real 0m11.668s 00:13:21.204 user 0m20.707s 00:13:21.204 sys 0m1.287s 
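The degraded-array check that just finished above can be replayed by hand against a running bdev_svc instance. A minimal sketch, assuming the same RPC socket /var/tmp/spdk-raid.sock and a raid1 bdev Existed_Raid built from BaseBdev1 and BaseBdev2 exactly as in this trace (rpc.py path and jq filter copied from it):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Drop one mirror leg; raid1 has redundancy, so the array is expected to stay online.
    $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'
    # expected output: online, then 1
    # Removing the second leg takes away the last base bdev, so the array is
    # deconfigured and its state goes offline, as the debug log above shows.
    $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2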
00:13:21.204 23:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:21.204 23:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.204 23:28:44 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:13:21.204 23:28:44 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:21.204 23:28:44 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:21.204 23:28:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.204 ************************************ 00:13:21.204 START TEST raid_state_function_test_sb 00:13:21.205 ************************************ 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:13:21.205 Process raid pid: 55537 00:13:21.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
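The only functional difference from the non-superblock run above is the -s wired into superblock_create_arg here. A minimal sketch of the calls this expands to further down in the trace, assuming the same two 32 MB / 512-byte-block malloc bdevs used throughout this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    # -s reserves room for an on-disk superblock on each leg: the base bdevs report
    # data_offset 2048 and data_size 63488 below, versus 0 and 65536 in the run above.
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2' -n Existed_Raid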
00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=55537 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 55537' 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 55537 /var/tmp/spdk-raid.sock 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 55537 ']' 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:21.205 23:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.205 [2024-05-14 23:28:44.314222] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:13:21.205 [2024-05-14 23:28:44.314409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.205 [2024-05-14 23:28:44.476318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.463 [2024-05-14 23:28:44.718227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.743 [2024-05-14 23:28:44.929759] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.030 23:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:22.030 23:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:13:22.030 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:22.288 [2024-05-14 23:28:45.338434] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:22.288 [2024-05-14 23:28:45.338512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:22.288 [2024-05-14 23:28:45.338527] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.288 [2024-05-14 23:28:45.338547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.288 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:22.288 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:22.288 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:22.288 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:22.288 23:28:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:22.288 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:22.288 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:22.288 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:22.288 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:22.288 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:22.288 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.288 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.547 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:22.547 "name": "Existed_Raid", 00:13:22.547 "uuid": "b6d81824-f1ca-449c-bb32-0ac042e54c85", 00:13:22.547 "strip_size_kb": 0, 00:13:22.547 "state": "configuring", 00:13:22.547 "raid_level": "raid1", 00:13:22.547 "superblock": true, 00:13:22.547 "num_base_bdevs": 2, 00:13:22.547 "num_base_bdevs_discovered": 0, 00:13:22.547 "num_base_bdevs_operational": 2, 00:13:22.547 "base_bdevs_list": [ 00:13:22.547 { 00:13:22.547 "name": "BaseBdev1", 00:13:22.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.547 "is_configured": false, 00:13:22.547 "data_offset": 0, 00:13:22.547 "data_size": 0 00:13:22.547 }, 00:13:22.547 { 00:13:22.547 "name": "BaseBdev2", 00:13:22.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.547 "is_configured": false, 00:13:22.547 "data_offset": 0, 00:13:22.547 "data_size": 0 00:13:22.547 } 00:13:22.547 ] 00:13:22.547 }' 00:13:22.547 23:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:22.547 23:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.114 23:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:23.373 [2024-05-14 23:28:46.454496] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.373 [2024-05-14 23:28:46.454562] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:13:23.373 23:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:23.631 [2024-05-14 23:28:46.694529] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.631 [2024-05-14 23:28:46.694646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.631 [2024-05-14 23:28:46.694674] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.631 [2024-05-14 23:28:46.694707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.631 23:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev1 00:13:23.890 [2024-05-14 23:28:46.942563] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.890 BaseBdev1 00:13:23.890 23:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:13:23.890 23:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:23.890 23:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:23.890 23:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:23.890 23:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:23.890 23:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:23.890 23:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:23.890 23:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:24.149 [ 00:13:24.149 { 00:13:24.149 "name": "BaseBdev1", 00:13:24.149 "aliases": [ 00:13:24.149 "afbc3995-e0b3-47a8-91c1-b2a881967f4a" 00:13:24.149 ], 00:13:24.149 "product_name": "Malloc disk", 00:13:24.149 "block_size": 512, 00:13:24.149 "num_blocks": 65536, 00:13:24.149 "uuid": "afbc3995-e0b3-47a8-91c1-b2a881967f4a", 00:13:24.149 "assigned_rate_limits": { 00:13:24.149 "rw_ios_per_sec": 0, 00:13:24.149 "rw_mbytes_per_sec": 0, 00:13:24.149 "r_mbytes_per_sec": 0, 00:13:24.149 "w_mbytes_per_sec": 0 00:13:24.149 }, 00:13:24.149 "claimed": true, 00:13:24.149 "claim_type": "exclusive_write", 00:13:24.149 "zoned": false, 00:13:24.149 "supported_io_types": { 00:13:24.149 "read": true, 00:13:24.149 "write": true, 00:13:24.149 "unmap": true, 00:13:24.149 "write_zeroes": true, 00:13:24.149 "flush": true, 00:13:24.149 "reset": true, 00:13:24.149 "compare": false, 00:13:24.149 "compare_and_write": false, 00:13:24.149 "abort": true, 00:13:24.149 "nvme_admin": false, 00:13:24.149 "nvme_io": false 00:13:24.149 }, 00:13:24.149 "memory_domains": [ 00:13:24.149 { 00:13:24.149 "dma_device_id": "system", 00:13:24.149 "dma_device_type": 1 00:13:24.149 }, 00:13:24.149 { 00:13:24.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.149 "dma_device_type": 2 00:13:24.149 } 00:13:24.149 ], 00:13:24.149 "driver_specific": {} 00:13:24.149 } 00:13:24.149 ] 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.149 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.407 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:24.407 "name": "Existed_Raid", 00:13:24.407 "uuid": "65cee34a-01ef-4576-8a1c-6513a0b412cd", 00:13:24.407 "strip_size_kb": 0, 00:13:24.407 "state": "configuring", 00:13:24.407 "raid_level": "raid1", 00:13:24.407 "superblock": true, 00:13:24.407 "num_base_bdevs": 2, 00:13:24.407 "num_base_bdevs_discovered": 1, 00:13:24.407 "num_base_bdevs_operational": 2, 00:13:24.407 "base_bdevs_list": [ 00:13:24.407 { 00:13:24.407 "name": "BaseBdev1", 00:13:24.407 "uuid": "afbc3995-e0b3-47a8-91c1-b2a881967f4a", 00:13:24.407 "is_configured": true, 00:13:24.407 "data_offset": 2048, 00:13:24.407 "data_size": 63488 00:13:24.407 }, 00:13:24.407 { 00:13:24.407 "name": "BaseBdev2", 00:13:24.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.407 "is_configured": false, 00:13:24.407 "data_offset": 0, 00:13:24.407 "data_size": 0 00:13:24.407 } 00:13:24.407 ] 00:13:24.407 }' 00:13:24.407 23:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:24.407 23:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.343 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:25.343 [2024-05-14 23:28:48.510827] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:25.343 [2024-05-14 23:28:48.510879] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:13:25.343 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:25.602 [2024-05-14 23:28:48.774939] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.602 [2024-05-14 23:28:48.776625] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:25.602 [2024-05-14 23:28:48.776691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 
-- # local expected_state=configuring 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.602 23:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.861 23:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:25.861 "name": "Existed_Raid", 00:13:25.861 "uuid": "94b3c84e-376f-4948-a4f1-f636df4e4bec", 00:13:25.861 "strip_size_kb": 0, 00:13:25.861 "state": "configuring", 00:13:25.861 "raid_level": "raid1", 00:13:25.861 "superblock": true, 00:13:25.861 "num_base_bdevs": 2, 00:13:25.861 "num_base_bdevs_discovered": 1, 00:13:25.861 "num_base_bdevs_operational": 2, 00:13:25.861 "base_bdevs_list": [ 00:13:25.861 { 00:13:25.861 "name": "BaseBdev1", 00:13:25.861 "uuid": "afbc3995-e0b3-47a8-91c1-b2a881967f4a", 00:13:25.861 "is_configured": true, 00:13:25.861 "data_offset": 2048, 00:13:25.861 "data_size": 63488 00:13:25.861 }, 00:13:25.861 { 00:13:25.861 "name": "BaseBdev2", 00:13:25.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.861 "is_configured": false, 00:13:25.861 "data_offset": 0, 00:13:25.861 "data_size": 0 00:13:25.861 } 00:13:25.861 ] 00:13:25.861 }' 00:13:25.861 23:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:25.861 23:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.798 23:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:26.798 [2024-05-14 23:28:50.005164] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.798 BaseBdev2 00:13:26.798 [2024-05-14 23:28:50.005580] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:13:26.798 [2024-05-14 23:28:50.005614] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:26.798 [2024-05-14 23:28:50.005719] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:13:26.798 [2024-05-14 23:28:50.005968] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:13:26.798 [2024-05-14 23:28:50.005984] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:13:26.798 [2024-05-14 23:28:50.006108] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.798 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 
-- # waitforbdev BaseBdev2 00:13:26.798 23:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:26.798 23:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:26.798 23:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:13:26.798 23:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:26.798 23:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:26.798 23:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:27.057 23:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.316 [ 00:13:27.316 { 00:13:27.316 "name": "BaseBdev2", 00:13:27.316 "aliases": [ 00:13:27.316 "f4040315-448e-4f45-bf96-20437dc82a8e" 00:13:27.316 ], 00:13:27.316 "product_name": "Malloc disk", 00:13:27.316 "block_size": 512, 00:13:27.316 "num_blocks": 65536, 00:13:27.316 "uuid": "f4040315-448e-4f45-bf96-20437dc82a8e", 00:13:27.316 "assigned_rate_limits": { 00:13:27.316 "rw_ios_per_sec": 0, 00:13:27.316 "rw_mbytes_per_sec": 0, 00:13:27.316 "r_mbytes_per_sec": 0, 00:13:27.316 "w_mbytes_per_sec": 0 00:13:27.316 }, 00:13:27.316 "claimed": true, 00:13:27.316 "claim_type": "exclusive_write", 00:13:27.316 "zoned": false, 00:13:27.316 "supported_io_types": { 00:13:27.316 "read": true, 00:13:27.316 "write": true, 00:13:27.316 "unmap": true, 00:13:27.316 "write_zeroes": true, 00:13:27.316 "flush": true, 00:13:27.316 "reset": true, 00:13:27.316 "compare": false, 00:13:27.316 "compare_and_write": false, 00:13:27.316 "abort": true, 00:13:27.316 "nvme_admin": false, 00:13:27.316 "nvme_io": false 00:13:27.316 }, 00:13:27.316 "memory_domains": [ 00:13:27.316 { 00:13:27.316 "dma_device_id": "system", 00:13:27.316 "dma_device_type": 1 00:13:27.316 }, 00:13:27.316 { 00:13:27.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.316 "dma_device_type": 2 00:13:27.316 } 00:13:27.316 ], 00:13:27.316 "driver_specific": {} 00:13:27.316 } 00:13:27.316 ] 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:27.316 23:28:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.316 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.574 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:27.575 "name": "Existed_Raid", 00:13:27.575 "uuid": "94b3c84e-376f-4948-a4f1-f636df4e4bec", 00:13:27.575 "strip_size_kb": 0, 00:13:27.575 "state": "online", 00:13:27.575 "raid_level": "raid1", 00:13:27.575 "superblock": true, 00:13:27.575 "num_base_bdevs": 2, 00:13:27.575 "num_base_bdevs_discovered": 2, 00:13:27.575 "num_base_bdevs_operational": 2, 00:13:27.575 "base_bdevs_list": [ 00:13:27.575 { 00:13:27.575 "name": "BaseBdev1", 00:13:27.575 "uuid": "afbc3995-e0b3-47a8-91c1-b2a881967f4a", 00:13:27.575 "is_configured": true, 00:13:27.575 "data_offset": 2048, 00:13:27.575 "data_size": 63488 00:13:27.575 }, 00:13:27.575 { 00:13:27.575 "name": "BaseBdev2", 00:13:27.575 "uuid": "f4040315-448e-4f45-bf96-20437dc82a8e", 00:13:27.575 "is_configured": true, 00:13:27.575 "data_offset": 2048, 00:13:27.575 "data_size": 63488 00:13:27.575 } 00:13:27.575 ] 00:13:27.575 }' 00:13:27.575 23:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:27.575 23:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.510 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:13:28.510 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:28.510 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:28.510 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:28.510 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:28.510 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:13:28.510 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:28.510 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:28.510 [2024-05-14 23:28:51.761816] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.510 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:28.510 "name": "Existed_Raid", 00:13:28.510 "aliases": [ 00:13:28.510 "94b3c84e-376f-4948-a4f1-f636df4e4bec" 00:13:28.510 ], 00:13:28.510 "product_name": "Raid Volume", 00:13:28.510 "block_size": 512, 00:13:28.510 "num_blocks": 63488, 00:13:28.510 "uuid": "94b3c84e-376f-4948-a4f1-f636df4e4bec", 00:13:28.510 "assigned_rate_limits": { 00:13:28.510 "rw_ios_per_sec": 0, 00:13:28.510 "rw_mbytes_per_sec": 0, 00:13:28.510 "r_mbytes_per_sec": 0, 00:13:28.510 "w_mbytes_per_sec": 0 00:13:28.510 }, 00:13:28.510 
"claimed": false, 00:13:28.510 "zoned": false, 00:13:28.510 "supported_io_types": { 00:13:28.510 "read": true, 00:13:28.510 "write": true, 00:13:28.510 "unmap": false, 00:13:28.510 "write_zeroes": true, 00:13:28.510 "flush": false, 00:13:28.510 "reset": true, 00:13:28.510 "compare": false, 00:13:28.510 "compare_and_write": false, 00:13:28.510 "abort": false, 00:13:28.510 "nvme_admin": false, 00:13:28.510 "nvme_io": false 00:13:28.510 }, 00:13:28.510 "memory_domains": [ 00:13:28.510 { 00:13:28.510 "dma_device_id": "system", 00:13:28.510 "dma_device_type": 1 00:13:28.510 }, 00:13:28.510 { 00:13:28.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.510 "dma_device_type": 2 00:13:28.510 }, 00:13:28.510 { 00:13:28.510 "dma_device_id": "system", 00:13:28.510 "dma_device_type": 1 00:13:28.510 }, 00:13:28.510 { 00:13:28.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.510 "dma_device_type": 2 00:13:28.510 } 00:13:28.510 ], 00:13:28.510 "driver_specific": { 00:13:28.510 "raid": { 00:13:28.510 "uuid": "94b3c84e-376f-4948-a4f1-f636df4e4bec", 00:13:28.510 "strip_size_kb": 0, 00:13:28.510 "state": "online", 00:13:28.510 "raid_level": "raid1", 00:13:28.510 "superblock": true, 00:13:28.510 "num_base_bdevs": 2, 00:13:28.510 "num_base_bdevs_discovered": 2, 00:13:28.510 "num_base_bdevs_operational": 2, 00:13:28.510 "base_bdevs_list": [ 00:13:28.510 { 00:13:28.510 "name": "BaseBdev1", 00:13:28.510 "uuid": "afbc3995-e0b3-47a8-91c1-b2a881967f4a", 00:13:28.510 "is_configured": true, 00:13:28.510 "data_offset": 2048, 00:13:28.510 "data_size": 63488 00:13:28.510 }, 00:13:28.510 { 00:13:28.510 "name": "BaseBdev2", 00:13:28.510 "uuid": "f4040315-448e-4f45-bf96-20437dc82a8e", 00:13:28.510 "is_configured": true, 00:13:28.510 "data_offset": 2048, 00:13:28.510 "data_size": 63488 00:13:28.510 } 00:13:28.510 ] 00:13:28.510 } 00:13:28.510 } 00:13:28.510 }' 00:13:28.510 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:28.769 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:13:28.769 BaseBdev2' 00:13:28.769 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:28.769 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:28.769 23:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:29.029 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:29.029 "name": "BaseBdev1", 00:13:29.029 "aliases": [ 00:13:29.029 "afbc3995-e0b3-47a8-91c1-b2a881967f4a" 00:13:29.029 ], 00:13:29.029 "product_name": "Malloc disk", 00:13:29.029 "block_size": 512, 00:13:29.029 "num_blocks": 65536, 00:13:29.029 "uuid": "afbc3995-e0b3-47a8-91c1-b2a881967f4a", 00:13:29.029 "assigned_rate_limits": { 00:13:29.029 "rw_ios_per_sec": 0, 00:13:29.029 "rw_mbytes_per_sec": 0, 00:13:29.029 "r_mbytes_per_sec": 0, 00:13:29.029 "w_mbytes_per_sec": 0 00:13:29.029 }, 00:13:29.029 "claimed": true, 00:13:29.029 "claim_type": "exclusive_write", 00:13:29.029 "zoned": false, 00:13:29.029 "supported_io_types": { 00:13:29.029 "read": true, 00:13:29.029 "write": true, 00:13:29.029 "unmap": true, 00:13:29.029 "write_zeroes": true, 00:13:29.029 "flush": true, 00:13:29.029 "reset": true, 00:13:29.029 "compare": false, 00:13:29.029 
"compare_and_write": false, 00:13:29.029 "abort": true, 00:13:29.029 "nvme_admin": false, 00:13:29.029 "nvme_io": false 00:13:29.029 }, 00:13:29.029 "memory_domains": [ 00:13:29.029 { 00:13:29.029 "dma_device_id": "system", 00:13:29.029 "dma_device_type": 1 00:13:29.029 }, 00:13:29.029 { 00:13:29.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.029 "dma_device_type": 2 00:13:29.029 } 00:13:29.029 ], 00:13:29.029 "driver_specific": {} 00:13:29.029 }' 00:13:29.029 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:29.029 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:29.029 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:29.029 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:29.029 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:29.289 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:29.289 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:29.289 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:29.289 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:29.289 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:29.289 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:29.289 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:29.289 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:29.289 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:29.289 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:29.548 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:29.548 "name": "BaseBdev2", 00:13:29.548 "aliases": [ 00:13:29.548 "f4040315-448e-4f45-bf96-20437dc82a8e" 00:13:29.548 ], 00:13:29.548 "product_name": "Malloc disk", 00:13:29.548 "block_size": 512, 00:13:29.548 "num_blocks": 65536, 00:13:29.548 "uuid": "f4040315-448e-4f45-bf96-20437dc82a8e", 00:13:29.548 "assigned_rate_limits": { 00:13:29.548 "rw_ios_per_sec": 0, 00:13:29.548 "rw_mbytes_per_sec": 0, 00:13:29.548 "r_mbytes_per_sec": 0, 00:13:29.548 "w_mbytes_per_sec": 0 00:13:29.548 }, 00:13:29.548 "claimed": true, 00:13:29.548 "claim_type": "exclusive_write", 00:13:29.548 "zoned": false, 00:13:29.548 "supported_io_types": { 00:13:29.548 "read": true, 00:13:29.548 "write": true, 00:13:29.548 "unmap": true, 00:13:29.548 "write_zeroes": true, 00:13:29.548 "flush": true, 00:13:29.548 "reset": true, 00:13:29.548 "compare": false, 00:13:29.548 "compare_and_write": false, 00:13:29.548 "abort": true, 00:13:29.548 "nvme_admin": false, 00:13:29.548 "nvme_io": false 00:13:29.548 }, 00:13:29.548 "memory_domains": [ 00:13:29.548 { 00:13:29.548 "dma_device_id": "system", 00:13:29.548 "dma_device_type": 1 00:13:29.548 }, 00:13:29.548 { 00:13:29.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.548 "dma_device_type": 2 00:13:29.548 } 00:13:29.548 ], 00:13:29.548 "driver_specific": {} 
00:13:29.548 }' 00:13:29.548 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:29.807 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:29.807 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:29.807 23:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:29.807 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:29.807 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:29.807 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:30.065 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:30.065 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:30.065 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:30.065 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:30.065 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:30.065 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:30.324 [2024-05-14 23:28:53.550013] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.582 23:28:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.839 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:30.839 "name": "Existed_Raid", 00:13:30.839 "uuid": "94b3c84e-376f-4948-a4f1-f636df4e4bec", 00:13:30.839 "strip_size_kb": 0, 00:13:30.839 "state": "online", 00:13:30.839 "raid_level": "raid1", 00:13:30.839 "superblock": true, 00:13:30.839 "num_base_bdevs": 2, 00:13:30.839 "num_base_bdevs_discovered": 1, 00:13:30.839 "num_base_bdevs_operational": 1, 00:13:30.839 "base_bdevs_list": [ 00:13:30.839 { 00:13:30.839 "name": null, 00:13:30.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.839 "is_configured": false, 00:13:30.839 "data_offset": 2048, 00:13:30.839 "data_size": 63488 00:13:30.839 }, 00:13:30.839 { 00:13:30.839 "name": "BaseBdev2", 00:13:30.839 "uuid": "f4040315-448e-4f45-bf96-20437dc82a8e", 00:13:30.839 "is_configured": true, 00:13:30.839 "data_offset": 2048, 00:13:30.839 "data_size": 63488 00:13:30.839 } 00:13:30.839 ] 00:13:30.839 }' 00:13:30.839 23:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:30.839 23:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.405 23:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:31.405 23:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:31.405 23:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.405 23:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:13:31.663 23:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:13:31.663 23:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:31.663 23:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:31.921 [2024-05-14 23:28:55.027676] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:31.921 [2024-05-14 23:28:55.027755] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.921 [2024-05-14 23:28:55.107777] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.921 [2024-05-14 23:28:55.107898] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.921 [2024-05-14 23:28:55.107929] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:13:31.921 23:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:31.921 23:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:31.921 23:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.921 23:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:13:32.179 23:28:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 55537 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 55537 ']' 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 55537 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 55537 00:13:32.179 killing process with pid 55537 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55537' 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 55537 00:13:32.179 23:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 55537 00:13:32.179 [2024-05-14 23:28:55.352967] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.179 [2024-05-14 23:28:55.353077] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.554 ************************************ 00:13:33.554 END TEST raid_state_function_test_sb 00:13:33.554 ************************************ 00:13:33.554 23:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:13:33.554 00:13:33.554 real 0m12.452s 00:13:33.554 user 0m22.110s 00:13:33.554 sys 0m1.309s 00:13:33.554 23:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.554 23:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.554 23:28:56 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:13:33.554 23:28:56 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:33.554 23:28:56 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.554 23:28:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.554 ************************************ 00:13:33.554 START TEST raid_superblock_test 00:13:33.554 ************************************ 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:33.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=55938 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 55938 /var/tmp/spdk-raid.sock 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 55938 ']' 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:33.554 23:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.554 [2024-05-14 23:28:56.808758] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:13:33.554 [2024-05-14 23:28:56.808978] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55938 ] 00:13:33.812 [2024-05-14 23:28:56.970469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.070 [2024-05-14 23:28:57.211170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.329 [2024-05-14 23:28:57.419055] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.588 23:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:34.588 23:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:13:34.588 23:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:34.588 23:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.588 23:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:34.588 23:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:34.588 23:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:34.588 23:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.588 23:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.588 23:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.588 23:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:34.847 malloc1 00:13:34.847 23:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:35.106 [2024-05-14 23:28:58.148959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:35.106 [2024-05-14 23:28:58.149062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.106 [2024-05-14 23:28:58.149139] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:13:35.106 [2024-05-14 23:28:58.149450] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.106 [2024-05-14 23:28:58.151336] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.106 [2024-05-14 23:28:58.151385] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:35.106 pt1 00:13:35.106 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:35.106 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:35.106 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:35.106 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:35.106 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:35.106 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:13:35.106 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:35.106 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:35.106 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:35.364 malloc2 00:13:35.364 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:35.364 [2024-05-14 23:28:58.584175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:35.364 [2024-05-14 23:28:58.584443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.364 [2024-05-14 23:28:58.584526] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:13:35.364 [2024-05-14 23:28:58.584596] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.364 [2024-05-14 23:28:58.586302] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.365 [2024-05-14 23:28:58.586355] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:35.365 pt2 00:13:35.365 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:35.365 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:35.365 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:35.624 [2024-05-14 23:28:58.776308] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:35.624 [2024-05-14 23:28:58.778003] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:35.624 [2024-05-14 23:28:58.778171] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:13:35.624 [2024-05-14 23:28:58.778191] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.624 [2024-05-14 23:28:58.778350] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:35.624 [2024-05-14 23:28:58.778623] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:13:35.624 [2024-05-14 23:28:58.778641] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:13:35.624 [2024-05-14 23:28:58.778777] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.624 23:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.883 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:35.883 "name": "raid_bdev1", 00:13:35.883 "uuid": "89d84f3c-ab43-43e2-88b3-07c4f770124d", 00:13:35.883 "strip_size_kb": 0, 00:13:35.883 "state": "online", 00:13:35.883 "raid_level": "raid1", 00:13:35.883 "superblock": true, 00:13:35.883 "num_base_bdevs": 2, 00:13:35.883 "num_base_bdevs_discovered": 2, 00:13:35.883 "num_base_bdevs_operational": 2, 00:13:35.883 "base_bdevs_list": [ 00:13:35.883 { 00:13:35.883 "name": "pt1", 00:13:35.883 "uuid": "f78f9c59-8c78-5348-81bd-2c6132ebbd8e", 00:13:35.883 "is_configured": true, 00:13:35.883 "data_offset": 2048, 00:13:35.883 "data_size": 63488 00:13:35.883 }, 00:13:35.883 { 00:13:35.883 "name": "pt2", 00:13:35.883 "uuid": "b76b4813-f897-50a8-aeab-075883c5d582", 00:13:35.883 "is_configured": true, 00:13:35.883 "data_offset": 2048, 00:13:35.883 "data_size": 63488 00:13:35.883 } 00:13:35.883 ] 00:13:35.883 }' 00:13:35.883 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:35.883 23:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.449 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:36.449 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:36.449 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:36.449 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:36.449 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:36.449 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:36.449 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:36.449 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:36.708 [2024-05-14 23:28:59.864580] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.708 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:36.708 "name": "raid_bdev1", 00:13:36.708 "aliases": [ 00:13:36.708 "89d84f3c-ab43-43e2-88b3-07c4f770124d" 00:13:36.708 ], 00:13:36.708 "product_name": "Raid Volume", 00:13:36.708 "block_size": 512, 00:13:36.708 "num_blocks": 63488, 00:13:36.708 "uuid": "89d84f3c-ab43-43e2-88b3-07c4f770124d", 00:13:36.708 "assigned_rate_limits": { 00:13:36.708 "rw_ios_per_sec": 0, 00:13:36.708 "rw_mbytes_per_sec": 0, 00:13:36.708 "r_mbytes_per_sec": 0, 00:13:36.708 "w_mbytes_per_sec": 0 00:13:36.708 }, 
00:13:36.708 "claimed": false, 00:13:36.708 "zoned": false, 00:13:36.708 "supported_io_types": { 00:13:36.708 "read": true, 00:13:36.708 "write": true, 00:13:36.708 "unmap": false, 00:13:36.708 "write_zeroes": true, 00:13:36.708 "flush": false, 00:13:36.708 "reset": true, 00:13:36.708 "compare": false, 00:13:36.708 "compare_and_write": false, 00:13:36.708 "abort": false, 00:13:36.708 "nvme_admin": false, 00:13:36.708 "nvme_io": false 00:13:36.708 }, 00:13:36.708 "memory_domains": [ 00:13:36.708 { 00:13:36.708 "dma_device_id": "system", 00:13:36.708 "dma_device_type": 1 00:13:36.708 }, 00:13:36.708 { 00:13:36.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.708 "dma_device_type": 2 00:13:36.708 }, 00:13:36.708 { 00:13:36.708 "dma_device_id": "system", 00:13:36.708 "dma_device_type": 1 00:13:36.708 }, 00:13:36.708 { 00:13:36.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.708 "dma_device_type": 2 00:13:36.708 } 00:13:36.708 ], 00:13:36.708 "driver_specific": { 00:13:36.708 "raid": { 00:13:36.708 "uuid": "89d84f3c-ab43-43e2-88b3-07c4f770124d", 00:13:36.708 "strip_size_kb": 0, 00:13:36.708 "state": "online", 00:13:36.708 "raid_level": "raid1", 00:13:36.708 "superblock": true, 00:13:36.708 "num_base_bdevs": 2, 00:13:36.708 "num_base_bdevs_discovered": 2, 00:13:36.708 "num_base_bdevs_operational": 2, 00:13:36.708 "base_bdevs_list": [ 00:13:36.708 { 00:13:36.708 "name": "pt1", 00:13:36.708 "uuid": "f78f9c59-8c78-5348-81bd-2c6132ebbd8e", 00:13:36.708 "is_configured": true, 00:13:36.708 "data_offset": 2048, 00:13:36.708 "data_size": 63488 00:13:36.708 }, 00:13:36.709 { 00:13:36.709 "name": "pt2", 00:13:36.709 "uuid": "b76b4813-f897-50a8-aeab-075883c5d582", 00:13:36.709 "is_configured": true, 00:13:36.709 "data_offset": 2048, 00:13:36.709 "data_size": 63488 00:13:36.709 } 00:13:36.709 ] 00:13:36.709 } 00:13:36.709 } 00:13:36.709 }' 00:13:36.709 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.709 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:36.709 pt2' 00:13:36.709 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:36.709 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:36.709 23:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:36.967 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:36.967 "name": "pt1", 00:13:36.967 "aliases": [ 00:13:36.967 "f78f9c59-8c78-5348-81bd-2c6132ebbd8e" 00:13:36.967 ], 00:13:36.967 "product_name": "passthru", 00:13:36.967 "block_size": 512, 00:13:36.967 "num_blocks": 65536, 00:13:36.967 "uuid": "f78f9c59-8c78-5348-81bd-2c6132ebbd8e", 00:13:36.967 "assigned_rate_limits": { 00:13:36.967 "rw_ios_per_sec": 0, 00:13:36.967 "rw_mbytes_per_sec": 0, 00:13:36.967 "r_mbytes_per_sec": 0, 00:13:36.967 "w_mbytes_per_sec": 0 00:13:36.967 }, 00:13:36.967 "claimed": true, 00:13:36.967 "claim_type": "exclusive_write", 00:13:36.967 "zoned": false, 00:13:36.967 "supported_io_types": { 00:13:36.967 "read": true, 00:13:36.967 "write": true, 00:13:36.967 "unmap": true, 00:13:36.967 "write_zeroes": true, 00:13:36.967 "flush": true, 00:13:36.967 "reset": true, 00:13:36.967 "compare": false, 00:13:36.967 "compare_and_write": false, 00:13:36.967 "abort": true, 00:13:36.967 
"nvme_admin": false, 00:13:36.967 "nvme_io": false 00:13:36.967 }, 00:13:36.967 "memory_domains": [ 00:13:36.967 { 00:13:36.967 "dma_device_id": "system", 00:13:36.967 "dma_device_type": 1 00:13:36.967 }, 00:13:36.967 { 00:13:36.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.967 "dma_device_type": 2 00:13:36.967 } 00:13:36.967 ], 00:13:36.967 "driver_specific": { 00:13:36.967 "passthru": { 00:13:36.967 "name": "pt1", 00:13:36.967 "base_bdev_name": "malloc1" 00:13:36.967 } 00:13:36.967 } 00:13:36.967 }' 00:13:36.967 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:36.967 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:37.226 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:37.226 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:37.226 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:37.226 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:37.226 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:37.226 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:37.226 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:37.226 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:37.485 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:37.485 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:37.485 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:37.485 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:37.485 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:37.745 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:37.745 "name": "pt2", 00:13:37.745 "aliases": [ 00:13:37.745 "b76b4813-f897-50a8-aeab-075883c5d582" 00:13:37.745 ], 00:13:37.745 "product_name": "passthru", 00:13:37.745 "block_size": 512, 00:13:37.745 "num_blocks": 65536, 00:13:37.745 "uuid": "b76b4813-f897-50a8-aeab-075883c5d582", 00:13:37.745 "assigned_rate_limits": { 00:13:37.745 "rw_ios_per_sec": 0, 00:13:37.745 "rw_mbytes_per_sec": 0, 00:13:37.745 "r_mbytes_per_sec": 0, 00:13:37.745 "w_mbytes_per_sec": 0 00:13:37.745 }, 00:13:37.745 "claimed": true, 00:13:37.745 "claim_type": "exclusive_write", 00:13:37.745 "zoned": false, 00:13:37.745 "supported_io_types": { 00:13:37.745 "read": true, 00:13:37.745 "write": true, 00:13:37.745 "unmap": true, 00:13:37.745 "write_zeroes": true, 00:13:37.745 "flush": true, 00:13:37.745 "reset": true, 00:13:37.745 "compare": false, 00:13:37.745 "compare_and_write": false, 00:13:37.745 "abort": true, 00:13:37.745 "nvme_admin": false, 00:13:37.745 "nvme_io": false 00:13:37.745 }, 00:13:37.745 "memory_domains": [ 00:13:37.745 { 00:13:37.745 "dma_device_id": "system", 00:13:37.745 "dma_device_type": 1 00:13:37.745 }, 00:13:37.745 { 00:13:37.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.745 "dma_device_type": 2 00:13:37.745 } 00:13:37.745 ], 00:13:37.745 "driver_specific": { 00:13:37.745 "passthru": { 00:13:37.745 "name": "pt2", 00:13:37.745 
"base_bdev_name": "malloc2" 00:13:37.745 } 00:13:37.745 } 00:13:37.745 }' 00:13:37.745 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:37.745 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:37.745 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:37.745 23:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:38.013 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:38.013 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:38.013 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:38.013 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:38.013 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:38.013 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:38.013 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:38.271 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:38.271 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:38.271 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:38.271 [2024-05-14 23:29:01.544735] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.530 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=89d84f3c-ab43-43e2-88b3-07c4f770124d 00:13:38.530 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 89d84f3c-ab43-43e2-88b3-07c4f770124d ']' 00:13:38.530 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:38.530 [2024-05-14 23:29:01.784624] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.530 [2024-05-14 23:29:01.784664] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.530 [2024-05-14 23:29:01.784739] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.530 [2024-05-14 23:29:01.784797] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.530 [2024-05-14 23:29:01.784811] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:13:38.530 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:38.530 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.789 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:38.789 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:38.789 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.789 23:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:13:39.048 23:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:39.048 23:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:39.307 23:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:39.307 23:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:39.566 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:13:39.824 [2024-05-14 23:29:02.856789] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:39.824 [2024-05-14 23:29:02.858808] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:39.824 [2024-05-14 23:29:02.858911] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:39.824 [2024-05-14 23:29:02.859021] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:39.824 [2024-05-14 23:29:02.859088] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.824 [2024-05-14 23:29:02.859111] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:13:39.824 request: 00:13:39.824 { 00:13:39.824 "name": "raid_bdev1", 00:13:39.824 "raid_level": "raid1", 00:13:39.824 "base_bdevs": [ 
00:13:39.824 "malloc1", 00:13:39.824 "malloc2" 00:13:39.824 ], 00:13:39.824 "superblock": false, 00:13:39.824 "method": "bdev_raid_create", 00:13:39.824 "req_id": 1 00:13:39.824 } 00:13:39.824 Got JSON-RPC error response 00:13:39.824 response: 00:13:39.824 { 00:13:39.824 "code": -17, 00:13:39.824 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:39.824 } 00:13:39.824 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:39.824 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:39.824 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:39.824 23:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:39.824 23:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:39.824 23:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.083 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:40.083 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:40.083 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:40.083 [2024-05-14 23:29:03.364811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:40.083 [2024-05-14 23:29:03.364971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.083 [2024-05-14 23:29:03.365016] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:13:40.083 [2024-05-14 23:29:03.365046] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.083 [2024-05-14 23:29:03.367029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.083 [2024-05-14 23:29:03.367186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:40.083 [2024-05-14 23:29:03.367285] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:40.083 [2024-05-14 23:29:03.367341] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:40.083 pt1 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:40.346 "name": "raid_bdev1", 00:13:40.346 "uuid": "89d84f3c-ab43-43e2-88b3-07c4f770124d", 00:13:40.346 "strip_size_kb": 0, 00:13:40.346 "state": "configuring", 00:13:40.346 "raid_level": "raid1", 00:13:40.346 "superblock": true, 00:13:40.346 "num_base_bdevs": 2, 00:13:40.346 "num_base_bdevs_discovered": 1, 00:13:40.346 "num_base_bdevs_operational": 2, 00:13:40.346 "base_bdevs_list": [ 00:13:40.346 { 00:13:40.346 "name": "pt1", 00:13:40.346 "uuid": "f78f9c59-8c78-5348-81bd-2c6132ebbd8e", 00:13:40.346 "is_configured": true, 00:13:40.346 "data_offset": 2048, 00:13:40.346 "data_size": 63488 00:13:40.346 }, 00:13:40.346 { 00:13:40.346 "name": null, 00:13:40.346 "uuid": "b76b4813-f897-50a8-aeab-075883c5d582", 00:13:40.346 "is_configured": false, 00:13:40.346 "data_offset": 2048, 00:13:40.346 "data_size": 63488 00:13:40.346 } 00:13:40.346 ] 00:13:40.346 }' 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:40.346 23:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:41.282 [2024-05-14 23:29:04.529075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:41.282 [2024-05-14 23:29:04.529400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.282 [2024-05-14 23:29:04.529479] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:13:41.282 [2024-05-14 23:29:04.529522] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.282 [2024-05-14 23:29:04.529993] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.282 [2024-05-14 23:29:04.530044] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:41.282 [2024-05-14 23:29:04.530173] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:41.282 [2024-05-14 23:29:04.530209] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:41.282 [2024-05-14 23:29:04.530324] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:13:41.282 [2024-05-14 23:29:04.530342] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.282 [2024-05-14 23:29:04.530457] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:13:41.282 [2024-05-14 23:29:04.530742] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:13:41.282 [2024-05-14 
23:29:04.530764] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:13:41.282 [2024-05-14 23:29:04.530911] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.282 pt2 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.282 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.848 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:41.848 "name": "raid_bdev1", 00:13:41.848 "uuid": "89d84f3c-ab43-43e2-88b3-07c4f770124d", 00:13:41.848 "strip_size_kb": 0, 00:13:41.848 "state": "online", 00:13:41.848 "raid_level": "raid1", 00:13:41.848 "superblock": true, 00:13:41.848 "num_base_bdevs": 2, 00:13:41.848 "num_base_bdevs_discovered": 2, 00:13:41.848 "num_base_bdevs_operational": 2, 00:13:41.848 "base_bdevs_list": [ 00:13:41.848 { 00:13:41.848 "name": "pt1", 00:13:41.848 "uuid": "f78f9c59-8c78-5348-81bd-2c6132ebbd8e", 00:13:41.848 "is_configured": true, 00:13:41.848 "data_offset": 2048, 00:13:41.848 "data_size": 63488 00:13:41.848 }, 00:13:41.848 { 00:13:41.848 "name": "pt2", 00:13:41.848 "uuid": "b76b4813-f897-50a8-aeab-075883c5d582", 00:13:41.848 "is_configured": true, 00:13:41.848 "data_offset": 2048, 00:13:41.848 "data_size": 63488 00:13:41.848 } 00:13:41.848 ] 00:13:41.848 }' 00:13:41.848 23:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:41.848 23:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.415 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:42.415 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:13:42.415 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:42.415 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:42.415 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 
-- # local base_bdev_names 00:13:42.415 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:42.415 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:42.415 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:42.415 [2024-05-14 23:29:05.673394] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.415 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:42.415 "name": "raid_bdev1", 00:13:42.415 "aliases": [ 00:13:42.415 "89d84f3c-ab43-43e2-88b3-07c4f770124d" 00:13:42.415 ], 00:13:42.415 "product_name": "Raid Volume", 00:13:42.415 "block_size": 512, 00:13:42.415 "num_blocks": 63488, 00:13:42.415 "uuid": "89d84f3c-ab43-43e2-88b3-07c4f770124d", 00:13:42.415 "assigned_rate_limits": { 00:13:42.415 "rw_ios_per_sec": 0, 00:13:42.415 "rw_mbytes_per_sec": 0, 00:13:42.415 "r_mbytes_per_sec": 0, 00:13:42.415 "w_mbytes_per_sec": 0 00:13:42.415 }, 00:13:42.415 "claimed": false, 00:13:42.415 "zoned": false, 00:13:42.415 "supported_io_types": { 00:13:42.415 "read": true, 00:13:42.415 "write": true, 00:13:42.415 "unmap": false, 00:13:42.415 "write_zeroes": true, 00:13:42.415 "flush": false, 00:13:42.415 "reset": true, 00:13:42.415 "compare": false, 00:13:42.415 "compare_and_write": false, 00:13:42.415 "abort": false, 00:13:42.415 "nvme_admin": false, 00:13:42.415 "nvme_io": false 00:13:42.415 }, 00:13:42.415 "memory_domains": [ 00:13:42.415 { 00:13:42.415 "dma_device_id": "system", 00:13:42.415 "dma_device_type": 1 00:13:42.415 }, 00:13:42.415 { 00:13:42.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.415 "dma_device_type": 2 00:13:42.415 }, 00:13:42.415 { 00:13:42.415 "dma_device_id": "system", 00:13:42.415 "dma_device_type": 1 00:13:42.415 }, 00:13:42.415 { 00:13:42.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.415 "dma_device_type": 2 00:13:42.415 } 00:13:42.415 ], 00:13:42.415 "driver_specific": { 00:13:42.415 "raid": { 00:13:42.415 "uuid": "89d84f3c-ab43-43e2-88b3-07c4f770124d", 00:13:42.415 "strip_size_kb": 0, 00:13:42.415 "state": "online", 00:13:42.415 "raid_level": "raid1", 00:13:42.415 "superblock": true, 00:13:42.415 "num_base_bdevs": 2, 00:13:42.415 "num_base_bdevs_discovered": 2, 00:13:42.415 "num_base_bdevs_operational": 2, 00:13:42.415 "base_bdevs_list": [ 00:13:42.415 { 00:13:42.415 "name": "pt1", 00:13:42.415 "uuid": "f78f9c59-8c78-5348-81bd-2c6132ebbd8e", 00:13:42.415 "is_configured": true, 00:13:42.415 "data_offset": 2048, 00:13:42.415 "data_size": 63488 00:13:42.415 }, 00:13:42.415 { 00:13:42.415 "name": "pt2", 00:13:42.415 "uuid": "b76b4813-f897-50a8-aeab-075883c5d582", 00:13:42.415 "is_configured": true, 00:13:42.415 "data_offset": 2048, 00:13:42.415 "data_size": 63488 00:13:42.415 } 00:13:42.415 ] 00:13:42.415 } 00:13:42.415 } 00:13:42.415 }' 00:13:42.415 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.673 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:13:42.673 pt2' 00:13:42.673 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:42.673 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:42.673 23:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:42.931 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:42.931 "name": "pt1", 00:13:42.931 "aliases": [ 00:13:42.931 "f78f9c59-8c78-5348-81bd-2c6132ebbd8e" 00:13:42.931 ], 00:13:42.931 "product_name": "passthru", 00:13:42.931 "block_size": 512, 00:13:42.931 "num_blocks": 65536, 00:13:42.931 "uuid": "f78f9c59-8c78-5348-81bd-2c6132ebbd8e", 00:13:42.931 "assigned_rate_limits": { 00:13:42.931 "rw_ios_per_sec": 0, 00:13:42.931 "rw_mbytes_per_sec": 0, 00:13:42.931 "r_mbytes_per_sec": 0, 00:13:42.931 "w_mbytes_per_sec": 0 00:13:42.931 }, 00:13:42.931 "claimed": true, 00:13:42.931 "claim_type": "exclusive_write", 00:13:42.931 "zoned": false, 00:13:42.931 "supported_io_types": { 00:13:42.931 "read": true, 00:13:42.931 "write": true, 00:13:42.931 "unmap": true, 00:13:42.931 "write_zeroes": true, 00:13:42.931 "flush": true, 00:13:42.931 "reset": true, 00:13:42.931 "compare": false, 00:13:42.931 "compare_and_write": false, 00:13:42.931 "abort": true, 00:13:42.931 "nvme_admin": false, 00:13:42.931 "nvme_io": false 00:13:42.931 }, 00:13:42.931 "memory_domains": [ 00:13:42.931 { 00:13:42.931 "dma_device_id": "system", 00:13:42.931 "dma_device_type": 1 00:13:42.931 }, 00:13:42.931 { 00:13:42.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.931 "dma_device_type": 2 00:13:42.931 } 00:13:42.931 ], 00:13:42.931 "driver_specific": { 00:13:42.931 "passthru": { 00:13:42.931 "name": "pt1", 00:13:42.931 "base_bdev_name": "malloc1" 00:13:42.931 } 00:13:42.931 } 00:13:42.931 }' 00:13:42.931 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:42.931 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:42.931 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:42.931 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:42.931 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:43.189 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.189 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:43.189 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:43.189 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.189 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:43.189 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:43.189 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:43.189 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:43.189 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:43.189 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:43.447 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:43.447 "name": "pt2", 00:13:43.447 "aliases": [ 00:13:43.447 "b76b4813-f897-50a8-aeab-075883c5d582" 00:13:43.447 ], 00:13:43.447 "product_name": "passthru", 00:13:43.447 "block_size": 512, 00:13:43.447 "num_blocks": 65536, 00:13:43.447 "uuid": 
"b76b4813-f897-50a8-aeab-075883c5d582", 00:13:43.447 "assigned_rate_limits": { 00:13:43.447 "rw_ios_per_sec": 0, 00:13:43.447 "rw_mbytes_per_sec": 0, 00:13:43.447 "r_mbytes_per_sec": 0, 00:13:43.447 "w_mbytes_per_sec": 0 00:13:43.447 }, 00:13:43.447 "claimed": true, 00:13:43.447 "claim_type": "exclusive_write", 00:13:43.447 "zoned": false, 00:13:43.447 "supported_io_types": { 00:13:43.447 "read": true, 00:13:43.447 "write": true, 00:13:43.447 "unmap": true, 00:13:43.447 "write_zeroes": true, 00:13:43.447 "flush": true, 00:13:43.447 "reset": true, 00:13:43.447 "compare": false, 00:13:43.447 "compare_and_write": false, 00:13:43.447 "abort": true, 00:13:43.447 "nvme_admin": false, 00:13:43.447 "nvme_io": false 00:13:43.447 }, 00:13:43.447 "memory_domains": [ 00:13:43.447 { 00:13:43.447 "dma_device_id": "system", 00:13:43.447 "dma_device_type": 1 00:13:43.447 }, 00:13:43.447 { 00:13:43.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.447 "dma_device_type": 2 00:13:43.447 } 00:13:43.447 ], 00:13:43.447 "driver_specific": { 00:13:43.447 "passthru": { 00:13:43.447 "name": "pt2", 00:13:43.447 "base_bdev_name": "malloc2" 00:13:43.447 } 00:13:43.447 } 00:13:43.447 }' 00:13:43.447 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:43.706 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:43.706 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:43.706 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:43.706 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:43.706 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.706 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:43.706 23:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:43.964 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.964 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:43.964 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:43.964 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:43.964 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:43.964 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:44.221 [2024-05-14 23:29:07.309615] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.221 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 89d84f3c-ab43-43e2-88b3-07c4f770124d '!=' 89d84f3c-ab43-43e2-88b3-07c4f770124d ']' 00:13:44.221 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:44.221 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:13:44.221 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:13:44.222 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:44.222 [2024-05-14 23:29:07.497579] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:44.479 23:29:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:44.479 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:44.479 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:44.479 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:44.479 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:44.479 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:44.479 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:44.479 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:44.480 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:44.480 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:44.480 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.480 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.480 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:44.480 "name": "raid_bdev1", 00:13:44.480 "uuid": "89d84f3c-ab43-43e2-88b3-07c4f770124d", 00:13:44.480 "strip_size_kb": 0, 00:13:44.480 "state": "online", 00:13:44.480 "raid_level": "raid1", 00:13:44.480 "superblock": true, 00:13:44.480 "num_base_bdevs": 2, 00:13:44.480 "num_base_bdevs_discovered": 1, 00:13:44.480 "num_base_bdevs_operational": 1, 00:13:44.480 "base_bdevs_list": [ 00:13:44.480 { 00:13:44.480 "name": null, 00:13:44.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.480 "is_configured": false, 00:13:44.480 "data_offset": 2048, 00:13:44.480 "data_size": 63488 00:13:44.480 }, 00:13:44.480 { 00:13:44.480 "name": "pt2", 00:13:44.480 "uuid": "b76b4813-f897-50a8-aeab-075883c5d582", 00:13:44.480 "is_configured": true, 00:13:44.480 "data_offset": 2048, 00:13:44.480 "data_size": 63488 00:13:44.480 } 00:13:44.480 ] 00:13:44.480 }' 00:13:44.480 23:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:44.480 23:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.415 23:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:45.415 [2024-05-14 23:29:08.665711] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:45.415 [2024-05-14 23:29:08.665752] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:45.415 [2024-05-14 23:29:08.665814] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.415 [2024-05-14 23:29:08.665850] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.415 [2024-05-14 23:29:08.665861] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:13:45.415 23:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:13:45.415 23:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:45.685 23:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:45.685 23:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:45.685 23:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:45.685 23:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:45.685 23:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:45.961 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:45.961 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:45.961 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:45.961 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:45.961 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:13:45.961 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:46.220 [2024-05-14 23:29:09.429798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:46.220 [2024-05-14 23:29:09.429935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.220 [2024-05-14 23:29:09.429986] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e280 00:13:46.220 [2024-05-14 23:29:09.430018] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.220 [2024-05-14 23:29:09.431900] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.220 [2024-05-14 23:29:09.431953] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:46.220 [2024-05-14 23:29:09.432043] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:46.220 [2024-05-14 23:29:09.432088] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:46.220 [2024-05-14 23:29:09.432180] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:13:46.220 [2024-05-14 23:29:09.432194] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.220 [2024-05-14 23:29:09.432275] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:13:46.220 [2024-05-14 23:29:09.432472] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:13:46.220 [2024-05-14 23:29:09.432491] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:13:46.220 [2024-05-14 23:29:09.432588] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.220 pt2 00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
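The verify_raid_bdev_state expansion starting here follows the same pattern used throughout this test: query all raid bdevs, select the entry by name with jq, and compare its state, level and base-bdev counters against the expected values. A simplified reading of the check being set up at this point, where the array has just been re-assembled from pt2 alone and is therefore expected to be online with one of its two base bdevs:

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<< "$info") == online ]]
    [[ $(jq -r .raid_level <<< "$info") == raid1 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 1 ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 1 ]]

The actual helper in bdev_raid.sh performs these comparisons through its own locals, as the trace that follows shows.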
00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.220 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.479 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:46.479 "name": "raid_bdev1", 00:13:46.479 "uuid": "89d84f3c-ab43-43e2-88b3-07c4f770124d", 00:13:46.479 "strip_size_kb": 0, 00:13:46.479 "state": "online", 00:13:46.479 "raid_level": "raid1", 00:13:46.479 "superblock": true, 00:13:46.479 "num_base_bdevs": 2, 00:13:46.479 "num_base_bdevs_discovered": 1, 00:13:46.479 "num_base_bdevs_operational": 1, 00:13:46.479 "base_bdevs_list": [ 00:13:46.479 { 00:13:46.479 "name": null, 00:13:46.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.479 "is_configured": false, 00:13:46.479 "data_offset": 2048, 00:13:46.479 "data_size": 63488 00:13:46.479 }, 00:13:46.479 { 00:13:46.479 "name": "pt2", 00:13:46.479 "uuid": "b76b4813-f897-50a8-aeab-075883c5d582", 00:13:46.479 "is_configured": true, 00:13:46.479 "data_offset": 2048, 00:13:46.479 "data_size": 63488 00:13:46.479 } 00:13:46.479 ] 00:13:46.479 }' 00:13:46.479 23:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:46.479 23:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:13:47.416 [2024-05-14 23:29:10.654044] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' 89d84f3c-ab43-43e2-88b3-07c4f770124d '!=' 89d84f3c-ab43-43e2-88b3-07c4f770124d ']' 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 55938 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 55938 ']' 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 55938 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 55938 00:13:47.416 23:29:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:47.416 killing process with pid 55938 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55938' 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 55938 00:13:47.416 23:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 55938 00:13:47.416 [2024-05-14 23:29:10.693597] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:47.416 [2024-05-14 23:29:10.693673] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.416 [2024-05-14 23:29:10.693709] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.416 [2024-05-14 23:29:10.693719] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:13:47.675 [2024-05-14 23:29:10.857948] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.049 ************************************ 00:13:49.049 END TEST raid_superblock_test 00:13:49.049 ************************************ 00:13:49.049 23:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:13:49.049 00:13:49.049 real 0m15.411s 00:13:49.049 user 0m28.181s 00:13:49.049 sys 0m1.607s 00:13:49.049 23:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:49.049 23:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 23:29:12 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:13:49.049 23:29:12 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:13:49.049 23:29:12 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:13:49.049 23:29:12 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:49.049 23:29:12 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:49.049 23:29:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 ************************************ 00:13:49.049 START TEST raid_state_function_test 00:13:49.049 ************************************ 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 false 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:13:49.049 23:29:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:13:49.049 Process raid pid: 56418 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=56418 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 56418' 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 56418 /var/tmp/spdk-raid.sock 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 56418 ']' 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:49.049 23:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:49.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:49.050 23:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:49.050 23:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:49.050 23:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.050 [2024-05-14 23:29:12.291215] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
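raid_state_function_test starts its own bdev_svc instance on the same RPC socket (the "Process raid pid: 56418" line above) and begins with the degenerate case: creating a raid0 array named Existed_Raid over three base bdevs that do not exist yet, then confirming the array sits in the "configuring" state with zero discovered base bdevs. In terms of the RPC calls that appear next in the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # expected readout: "state": "configuring", "num_base_bdevs_discovered": 0
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'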
00:13:49.050 [2024-05-14 23:29:12.291414] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.307 [2024-05-14 23:29:12.454749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.565 [2024-05-14 23:29:12.706384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.822 [2024-05-14 23:29:12.909714] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:50.079 [2024-05-14 23:29:13.290792] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.079 [2024-05-14 23:29:13.290880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.079 [2024-05-14 23:29:13.290897] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.079 [2024-05-14 23:29:13.290918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.079 [2024-05-14 23:29:13.290928] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:50.079 [2024-05-14 23:29:13.290992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.079 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.337 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:50.337 "name": "Existed_Raid", 00:13:50.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.337 "strip_size_kb": 64, 
00:13:50.337 "state": "configuring", 00:13:50.337 "raid_level": "raid0", 00:13:50.337 "superblock": false, 00:13:50.337 "num_base_bdevs": 3, 00:13:50.337 "num_base_bdevs_discovered": 0, 00:13:50.337 "num_base_bdevs_operational": 3, 00:13:50.337 "base_bdevs_list": [ 00:13:50.337 { 00:13:50.337 "name": "BaseBdev1", 00:13:50.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.337 "is_configured": false, 00:13:50.337 "data_offset": 0, 00:13:50.337 "data_size": 0 00:13:50.337 }, 00:13:50.337 { 00:13:50.337 "name": "BaseBdev2", 00:13:50.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.337 "is_configured": false, 00:13:50.337 "data_offset": 0, 00:13:50.337 "data_size": 0 00:13:50.337 }, 00:13:50.337 { 00:13:50.337 "name": "BaseBdev3", 00:13:50.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.337 "is_configured": false, 00:13:50.337 "data_offset": 0, 00:13:50.337 "data_size": 0 00:13:50.337 } 00:13:50.337 ] 00:13:50.337 }' 00:13:50.337 23:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:50.337 23:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.269 23:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:51.269 [2024-05-14 23:29:14.394832] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:51.269 [2024-05-14 23:29:14.394893] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:13:51.270 23:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:51.527 [2024-05-14 23:29:14.582880] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.527 [2024-05-14 23:29:14.582966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.527 [2024-05-14 23:29:14.582982] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.527 [2024-05-14 23:29:14.583012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.527 [2024-05-14 23:29:14.583022] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:51.527 [2024-05-14 23:29:14.583048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:51.527 23:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:51.527 [2024-05-14 23:29:14.810759] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.784 BaseBdev1 00:13:51.784 23:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:13:51.784 23:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:13:51.784 23:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:51.784 23:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:51.784 23:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
00:13:51.784 23:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:51.784 23:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:51.784 23:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:52.041 [ 00:13:52.041 { 00:13:52.041 "name": "BaseBdev1", 00:13:52.041 "aliases": [ 00:13:52.041 "d3a62851-f891-4f42-a304-4e26c316b042" 00:13:52.041 ], 00:13:52.041 "product_name": "Malloc disk", 00:13:52.041 "block_size": 512, 00:13:52.041 "num_blocks": 65536, 00:13:52.041 "uuid": "d3a62851-f891-4f42-a304-4e26c316b042", 00:13:52.041 "assigned_rate_limits": { 00:13:52.041 "rw_ios_per_sec": 0, 00:13:52.041 "rw_mbytes_per_sec": 0, 00:13:52.041 "r_mbytes_per_sec": 0, 00:13:52.041 "w_mbytes_per_sec": 0 00:13:52.041 }, 00:13:52.041 "claimed": true, 00:13:52.041 "claim_type": "exclusive_write", 00:13:52.041 "zoned": false, 00:13:52.041 "supported_io_types": { 00:13:52.041 "read": true, 00:13:52.041 "write": true, 00:13:52.041 "unmap": true, 00:13:52.041 "write_zeroes": true, 00:13:52.041 "flush": true, 00:13:52.041 "reset": true, 00:13:52.041 "compare": false, 00:13:52.041 "compare_and_write": false, 00:13:52.041 "abort": true, 00:13:52.041 "nvme_admin": false, 00:13:52.041 "nvme_io": false 00:13:52.041 }, 00:13:52.041 "memory_domains": [ 00:13:52.041 { 00:13:52.041 "dma_device_id": "system", 00:13:52.041 "dma_device_type": 1 00:13:52.041 }, 00:13:52.041 { 00:13:52.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.041 "dma_device_type": 2 00:13:52.041 } 00:13:52.041 ], 00:13:52.041 "driver_specific": {} 00:13:52.041 } 00:13:52.041 ] 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.041 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.300 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:13:52.300 "name": "Existed_Raid", 00:13:52.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.300 "strip_size_kb": 64, 00:13:52.300 "state": "configuring", 00:13:52.300 "raid_level": "raid0", 00:13:52.300 "superblock": false, 00:13:52.300 "num_base_bdevs": 3, 00:13:52.300 "num_base_bdevs_discovered": 1, 00:13:52.300 "num_base_bdevs_operational": 3, 00:13:52.300 "base_bdevs_list": [ 00:13:52.300 { 00:13:52.300 "name": "BaseBdev1", 00:13:52.300 "uuid": "d3a62851-f891-4f42-a304-4e26c316b042", 00:13:52.300 "is_configured": true, 00:13:52.300 "data_offset": 0, 00:13:52.300 "data_size": 65536 00:13:52.300 }, 00:13:52.300 { 00:13:52.300 "name": "BaseBdev2", 00:13:52.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.300 "is_configured": false, 00:13:52.300 "data_offset": 0, 00:13:52.300 "data_size": 0 00:13:52.300 }, 00:13:52.300 { 00:13:52.300 "name": "BaseBdev3", 00:13:52.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.300 "is_configured": false, 00:13:52.300 "data_offset": 0, 00:13:52.300 "data_size": 0 00:13:52.300 } 00:13:52.300 ] 00:13:52.300 }' 00:13:52.300 23:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:52.300 23:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.868 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:53.127 [2024-05-14 23:29:16.275072] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.127 [2024-05-14 23:29:16.275132] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:13:53.127 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:53.385 [2024-05-14 23:29:16.511162] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.385 [2024-05-14 23:29:16.512860] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.385 [2024-05-14 23:29:16.512931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.385 [2024-05-14 23:29:16.512962] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:53.385 [2024-05-14 23:29:16.512988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=3 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:53.385 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:53.386 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.386 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.643 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:53.643 "name": "Existed_Raid", 00:13:53.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.643 "strip_size_kb": 64, 00:13:53.643 "state": "configuring", 00:13:53.643 "raid_level": "raid0", 00:13:53.643 "superblock": false, 00:13:53.643 "num_base_bdevs": 3, 00:13:53.643 "num_base_bdevs_discovered": 1, 00:13:53.643 "num_base_bdevs_operational": 3, 00:13:53.643 "base_bdevs_list": [ 00:13:53.643 { 00:13:53.643 "name": "BaseBdev1", 00:13:53.643 "uuid": "d3a62851-f891-4f42-a304-4e26c316b042", 00:13:53.643 "is_configured": true, 00:13:53.643 "data_offset": 0, 00:13:53.643 "data_size": 65536 00:13:53.643 }, 00:13:53.643 { 00:13:53.643 "name": "BaseBdev2", 00:13:53.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.643 "is_configured": false, 00:13:53.643 "data_offset": 0, 00:13:53.643 "data_size": 0 00:13:53.643 }, 00:13:53.643 { 00:13:53.643 "name": "BaseBdev3", 00:13:53.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.643 "is_configured": false, 00:13:53.643 "data_offset": 0, 00:13:53.643 "data_size": 0 00:13:53.643 } 00:13:53.643 ] 00:13:53.643 }' 00:13:53.643 23:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:53.643 23:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.209 23:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:54.467 [2024-05-14 23:29:17.725028] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.467 BaseBdev2 00:13:54.467 23:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:13:54.467 23:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:13:54.467 23:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:54.467 23:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:54.467 23:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:54.467 23:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:54.467 23:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:54.725 23:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:54.983 [ 00:13:54.983 { 00:13:54.983 "name": "BaseBdev2", 00:13:54.983 "aliases": [ 00:13:54.983 "fc048fa0-a262-4ab0-8c3e-c904ced928e3" 00:13:54.983 ], 00:13:54.983 "product_name": "Malloc disk", 00:13:54.983 "block_size": 512, 00:13:54.983 "num_blocks": 65536, 00:13:54.983 "uuid": "fc048fa0-a262-4ab0-8c3e-c904ced928e3", 00:13:54.983 "assigned_rate_limits": { 00:13:54.983 "rw_ios_per_sec": 0, 00:13:54.983 "rw_mbytes_per_sec": 0, 00:13:54.983 "r_mbytes_per_sec": 0, 00:13:54.983 "w_mbytes_per_sec": 0 00:13:54.984 }, 00:13:54.984 "claimed": true, 00:13:54.984 "claim_type": "exclusive_write", 00:13:54.984 "zoned": false, 00:13:54.984 "supported_io_types": { 00:13:54.984 "read": true, 00:13:54.984 "write": true, 00:13:54.984 "unmap": true, 00:13:54.984 "write_zeroes": true, 00:13:54.984 "flush": true, 00:13:54.984 "reset": true, 00:13:54.984 "compare": false, 00:13:54.984 "compare_and_write": false, 00:13:54.984 "abort": true, 00:13:54.984 "nvme_admin": false, 00:13:54.984 "nvme_io": false 00:13:54.984 }, 00:13:54.984 "memory_domains": [ 00:13:54.984 { 00:13:54.984 "dma_device_id": "system", 00:13:54.984 "dma_device_type": 1 00:13:54.984 }, 00:13:54.984 { 00:13:54.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.984 "dma_device_type": 2 00:13:54.984 } 00:13:54.984 ], 00:13:54.984 "driver_specific": {} 00:13:54.984 } 00:13:54.984 ] 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.984 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.242 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:55.242 "name": "Existed_Raid", 00:13:55.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.242 "strip_size_kb": 64, 00:13:55.242 "state": "configuring", 00:13:55.242 "raid_level": "raid0", 00:13:55.242 "superblock": false, 00:13:55.242 
"num_base_bdevs": 3, 00:13:55.242 "num_base_bdevs_discovered": 2, 00:13:55.242 "num_base_bdevs_operational": 3, 00:13:55.242 "base_bdevs_list": [ 00:13:55.242 { 00:13:55.242 "name": "BaseBdev1", 00:13:55.242 "uuid": "d3a62851-f891-4f42-a304-4e26c316b042", 00:13:55.242 "is_configured": true, 00:13:55.242 "data_offset": 0, 00:13:55.242 "data_size": 65536 00:13:55.242 }, 00:13:55.242 { 00:13:55.242 "name": "BaseBdev2", 00:13:55.242 "uuid": "fc048fa0-a262-4ab0-8c3e-c904ced928e3", 00:13:55.242 "is_configured": true, 00:13:55.242 "data_offset": 0, 00:13:55.242 "data_size": 65536 00:13:55.242 }, 00:13:55.242 { 00:13:55.242 "name": "BaseBdev3", 00:13:55.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.242 "is_configured": false, 00:13:55.242 "data_offset": 0, 00:13:55.242 "data_size": 0 00:13:55.242 } 00:13:55.242 ] 00:13:55.242 }' 00:13:55.242 23:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:55.242 23:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.177 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:56.177 [2024-05-14 23:29:19.343062] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.177 [2024-05-14 23:29:19.343117] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:13:56.177 [2024-05-14 23:29:19.343128] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:56.177 [2024-05-14 23:29:19.343512] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:13:56.177 [2024-05-14 23:29:19.343770] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:13:56.177 [2024-05-14 23:29:19.343786] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:13:56.177 [2024-05-14 23:29:19.343981] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.177 BaseBdev3 00:13:56.177 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:13:56.177 23:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:13:56.177 23:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:56.177 23:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:13:56.177 23:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:56.177 23:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:56.177 23:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:56.434 23:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:56.691 [ 00:13:56.691 { 00:13:56.691 "name": "BaseBdev3", 00:13:56.691 "aliases": [ 00:13:56.691 "d321e6ed-b843-43eb-abc1-9195d08ef4b0" 00:13:56.691 ], 00:13:56.691 "product_name": "Malloc disk", 00:13:56.691 "block_size": 512, 00:13:56.691 "num_blocks": 65536, 00:13:56.691 "uuid": 
"d321e6ed-b843-43eb-abc1-9195d08ef4b0", 00:13:56.691 "assigned_rate_limits": { 00:13:56.691 "rw_ios_per_sec": 0, 00:13:56.691 "rw_mbytes_per_sec": 0, 00:13:56.691 "r_mbytes_per_sec": 0, 00:13:56.691 "w_mbytes_per_sec": 0 00:13:56.691 }, 00:13:56.691 "claimed": true, 00:13:56.691 "claim_type": "exclusive_write", 00:13:56.691 "zoned": false, 00:13:56.691 "supported_io_types": { 00:13:56.691 "read": true, 00:13:56.691 "write": true, 00:13:56.691 "unmap": true, 00:13:56.691 "write_zeroes": true, 00:13:56.691 "flush": true, 00:13:56.691 "reset": true, 00:13:56.691 "compare": false, 00:13:56.691 "compare_and_write": false, 00:13:56.691 "abort": true, 00:13:56.691 "nvme_admin": false, 00:13:56.691 "nvme_io": false 00:13:56.691 }, 00:13:56.691 "memory_domains": [ 00:13:56.691 { 00:13:56.691 "dma_device_id": "system", 00:13:56.691 "dma_device_type": 1 00:13:56.691 }, 00:13:56.691 { 00:13:56.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.691 "dma_device_type": 2 00:13:56.691 } 00:13:56.691 ], 00:13:56.691 "driver_specific": {} 00:13:56.691 } 00:13:56.691 ] 00:13:56.691 23:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:13:56.691 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:13:56.691 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:13:56.691 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:56.691 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:56.691 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:56.691 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:56.691 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:56.691 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:56.692 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:56.692 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:56.692 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:56.692 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:56.692 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.692 23:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.949 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:56.949 "name": "Existed_Raid", 00:13:56.949 "uuid": "2831d6d2-6de5-4e78-a954-e4acdf274169", 00:13:56.949 "strip_size_kb": 64, 00:13:56.949 "state": "online", 00:13:56.949 "raid_level": "raid0", 00:13:56.949 "superblock": false, 00:13:56.949 "num_base_bdevs": 3, 00:13:56.949 "num_base_bdevs_discovered": 3, 00:13:56.949 "num_base_bdevs_operational": 3, 00:13:56.949 "base_bdevs_list": [ 00:13:56.949 { 00:13:56.949 "name": "BaseBdev1", 00:13:56.949 "uuid": "d3a62851-f891-4f42-a304-4e26c316b042", 00:13:56.949 "is_configured": true, 00:13:56.949 "data_offset": 0, 00:13:56.949 "data_size": 65536 
00:13:56.949 }, 00:13:56.949 { 00:13:56.949 "name": "BaseBdev2", 00:13:56.949 "uuid": "fc048fa0-a262-4ab0-8c3e-c904ced928e3", 00:13:56.949 "is_configured": true, 00:13:56.949 "data_offset": 0, 00:13:56.949 "data_size": 65536 00:13:56.949 }, 00:13:56.949 { 00:13:56.949 "name": "BaseBdev3", 00:13:56.949 "uuid": "d321e6ed-b843-43eb-abc1-9195d08ef4b0", 00:13:56.949 "is_configured": true, 00:13:56.949 "data_offset": 0, 00:13:56.949 "data_size": 65536 00:13:56.949 } 00:13:56.949 ] 00:13:56.949 }' 00:13:56.949 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:56.949 23:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.515 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:13:57.515 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:13:57.515 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:13:57.515 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:13:57.515 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:13:57.515 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:13:57.515 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:13:57.515 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:57.773 [2024-05-14 23:29:20.903556] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.773 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:13:57.773 "name": "Existed_Raid", 00:13:57.773 "aliases": [ 00:13:57.773 "2831d6d2-6de5-4e78-a954-e4acdf274169" 00:13:57.773 ], 00:13:57.773 "product_name": "Raid Volume", 00:13:57.773 "block_size": 512, 00:13:57.773 "num_blocks": 196608, 00:13:57.773 "uuid": "2831d6d2-6de5-4e78-a954-e4acdf274169", 00:13:57.773 "assigned_rate_limits": { 00:13:57.773 "rw_ios_per_sec": 0, 00:13:57.773 "rw_mbytes_per_sec": 0, 00:13:57.773 "r_mbytes_per_sec": 0, 00:13:57.773 "w_mbytes_per_sec": 0 00:13:57.773 }, 00:13:57.773 "claimed": false, 00:13:57.773 "zoned": false, 00:13:57.773 "supported_io_types": { 00:13:57.773 "read": true, 00:13:57.773 "write": true, 00:13:57.773 "unmap": true, 00:13:57.773 "write_zeroes": true, 00:13:57.773 "flush": true, 00:13:57.773 "reset": true, 00:13:57.773 "compare": false, 00:13:57.773 "compare_and_write": false, 00:13:57.773 "abort": false, 00:13:57.773 "nvme_admin": false, 00:13:57.773 "nvme_io": false 00:13:57.773 }, 00:13:57.773 "memory_domains": [ 00:13:57.773 { 00:13:57.773 "dma_device_id": "system", 00:13:57.773 "dma_device_type": 1 00:13:57.774 }, 00:13:57.774 { 00:13:57.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.774 "dma_device_type": 2 00:13:57.774 }, 00:13:57.774 { 00:13:57.774 "dma_device_id": "system", 00:13:57.774 "dma_device_type": 1 00:13:57.774 }, 00:13:57.774 { 00:13:57.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.774 "dma_device_type": 2 00:13:57.774 }, 00:13:57.774 { 00:13:57.774 "dma_device_id": "system", 00:13:57.774 "dma_device_type": 1 00:13:57.774 }, 00:13:57.774 { 00:13:57.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.774 "dma_device_type": 2 00:13:57.774 } 00:13:57.774 
], 00:13:57.774 "driver_specific": { 00:13:57.774 "raid": { 00:13:57.774 "uuid": "2831d6d2-6de5-4e78-a954-e4acdf274169", 00:13:57.774 "strip_size_kb": 64, 00:13:57.774 "state": "online", 00:13:57.774 "raid_level": "raid0", 00:13:57.774 "superblock": false, 00:13:57.774 "num_base_bdevs": 3, 00:13:57.774 "num_base_bdevs_discovered": 3, 00:13:57.774 "num_base_bdevs_operational": 3, 00:13:57.774 "base_bdevs_list": [ 00:13:57.774 { 00:13:57.774 "name": "BaseBdev1", 00:13:57.774 "uuid": "d3a62851-f891-4f42-a304-4e26c316b042", 00:13:57.774 "is_configured": true, 00:13:57.774 "data_offset": 0, 00:13:57.774 "data_size": 65536 00:13:57.774 }, 00:13:57.774 { 00:13:57.774 "name": "BaseBdev2", 00:13:57.774 "uuid": "fc048fa0-a262-4ab0-8c3e-c904ced928e3", 00:13:57.774 "is_configured": true, 00:13:57.774 "data_offset": 0, 00:13:57.774 "data_size": 65536 00:13:57.774 }, 00:13:57.774 { 00:13:57.774 "name": "BaseBdev3", 00:13:57.774 "uuid": "d321e6ed-b843-43eb-abc1-9195d08ef4b0", 00:13:57.774 "is_configured": true, 00:13:57.774 "data_offset": 0, 00:13:57.774 "data_size": 65536 00:13:57.774 } 00:13:57.774 ] 00:13:57.774 } 00:13:57.774 } 00:13:57.774 }' 00:13:57.774 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:57.774 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:13:57.774 BaseBdev2 00:13:57.774 BaseBdev3' 00:13:57.774 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:57.774 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:57.774 23:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:58.032 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:58.032 "name": "BaseBdev1", 00:13:58.032 "aliases": [ 00:13:58.032 "d3a62851-f891-4f42-a304-4e26c316b042" 00:13:58.032 ], 00:13:58.032 "product_name": "Malloc disk", 00:13:58.032 "block_size": 512, 00:13:58.032 "num_blocks": 65536, 00:13:58.032 "uuid": "d3a62851-f891-4f42-a304-4e26c316b042", 00:13:58.032 "assigned_rate_limits": { 00:13:58.032 "rw_ios_per_sec": 0, 00:13:58.032 "rw_mbytes_per_sec": 0, 00:13:58.032 "r_mbytes_per_sec": 0, 00:13:58.032 "w_mbytes_per_sec": 0 00:13:58.032 }, 00:13:58.032 "claimed": true, 00:13:58.032 "claim_type": "exclusive_write", 00:13:58.032 "zoned": false, 00:13:58.032 "supported_io_types": { 00:13:58.032 "read": true, 00:13:58.032 "write": true, 00:13:58.032 "unmap": true, 00:13:58.032 "write_zeroes": true, 00:13:58.032 "flush": true, 00:13:58.032 "reset": true, 00:13:58.032 "compare": false, 00:13:58.032 "compare_and_write": false, 00:13:58.032 "abort": true, 00:13:58.032 "nvme_admin": false, 00:13:58.032 "nvme_io": false 00:13:58.032 }, 00:13:58.032 "memory_domains": [ 00:13:58.032 { 00:13:58.032 "dma_device_id": "system", 00:13:58.032 "dma_device_type": 1 00:13:58.032 }, 00:13:58.032 { 00:13:58.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.032 "dma_device_type": 2 00:13:58.032 } 00:13:58.032 ], 00:13:58.032 "driver_specific": {} 00:13:58.032 }' 00:13:58.032 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:58.032 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:58.032 23:29:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:58.032 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:58.291 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:58.291 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:58.291 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:58.291 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:58.291 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:58.291 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:58.550 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:58.550 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:58.550 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:58.550 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:58.550 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:58.809 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:58.809 "name": "BaseBdev2", 00:13:58.809 "aliases": [ 00:13:58.809 "fc048fa0-a262-4ab0-8c3e-c904ced928e3" 00:13:58.809 ], 00:13:58.809 "product_name": "Malloc disk", 00:13:58.809 "block_size": 512, 00:13:58.809 "num_blocks": 65536, 00:13:58.809 "uuid": "fc048fa0-a262-4ab0-8c3e-c904ced928e3", 00:13:58.809 "assigned_rate_limits": { 00:13:58.809 "rw_ios_per_sec": 0, 00:13:58.809 "rw_mbytes_per_sec": 0, 00:13:58.809 "r_mbytes_per_sec": 0, 00:13:58.809 "w_mbytes_per_sec": 0 00:13:58.809 }, 00:13:58.809 "claimed": true, 00:13:58.809 "claim_type": "exclusive_write", 00:13:58.809 "zoned": false, 00:13:58.809 "supported_io_types": { 00:13:58.809 "read": true, 00:13:58.809 "write": true, 00:13:58.809 "unmap": true, 00:13:58.809 "write_zeroes": true, 00:13:58.809 "flush": true, 00:13:58.809 "reset": true, 00:13:58.809 "compare": false, 00:13:58.809 "compare_and_write": false, 00:13:58.809 "abort": true, 00:13:58.809 "nvme_admin": false, 00:13:58.809 "nvme_io": false 00:13:58.809 }, 00:13:58.809 "memory_domains": [ 00:13:58.809 { 00:13:58.809 "dma_device_id": "system", 00:13:58.809 "dma_device_type": 1 00:13:58.809 }, 00:13:58.809 { 00:13:58.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.809 "dma_device_type": 2 00:13:58.809 } 00:13:58.809 ], 00:13:58.809 "driver_specific": {} 00:13:58.809 }' 00:13:58.809 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:58.809 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:58.809 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:58.809 23:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:58.809 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:59.067 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:59.067 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 
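The pairs of identical jq probes running through this part of the trace implement verify_raid_bdev_properties: for every member, the same field is read from both the member's descriptor and the Existed_Raid descriptor and the two values are compared (512 == 512 for block_size, null == null for md_size, md_interleave and dif_type). One such comparison, written out as plain shell for clarity (illustrative only; the harness drives this through its own jq pipelines, as visible in the surrounding trace):

  # Compare one field of a member bdev against the raid volume it belongs to.
  raid_bs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_get_bdevs -b Existed_Raid | jq '.[0].block_size')
  base_bs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_get_bdevs -b BaseBdev2 | jq '.[0].block_size')
  [[ "$raid_bs" == "$base_bs" ]]   # both report 512 in this run
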
00:13:59.067 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:59.067 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:59.067 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:59.067 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:59.067 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:13:59.067 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:13:59.067 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:59.067 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:13:59.325 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:13:59.325 "name": "BaseBdev3", 00:13:59.325 "aliases": [ 00:13:59.325 "d321e6ed-b843-43eb-abc1-9195d08ef4b0" 00:13:59.325 ], 00:13:59.325 "product_name": "Malloc disk", 00:13:59.325 "block_size": 512, 00:13:59.325 "num_blocks": 65536, 00:13:59.325 "uuid": "d321e6ed-b843-43eb-abc1-9195d08ef4b0", 00:13:59.325 "assigned_rate_limits": { 00:13:59.325 "rw_ios_per_sec": 0, 00:13:59.325 "rw_mbytes_per_sec": 0, 00:13:59.325 "r_mbytes_per_sec": 0, 00:13:59.325 "w_mbytes_per_sec": 0 00:13:59.325 }, 00:13:59.325 "claimed": true, 00:13:59.325 "claim_type": "exclusive_write", 00:13:59.325 "zoned": false, 00:13:59.325 "supported_io_types": { 00:13:59.325 "read": true, 00:13:59.325 "write": true, 00:13:59.325 "unmap": true, 00:13:59.325 "write_zeroes": true, 00:13:59.325 "flush": true, 00:13:59.325 "reset": true, 00:13:59.325 "compare": false, 00:13:59.325 "compare_and_write": false, 00:13:59.325 "abort": true, 00:13:59.325 "nvme_admin": false, 00:13:59.325 "nvme_io": false 00:13:59.325 }, 00:13:59.325 "memory_domains": [ 00:13:59.325 { 00:13:59.325 "dma_device_id": "system", 00:13:59.325 "dma_device_type": 1 00:13:59.325 }, 00:13:59.325 { 00:13:59.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.325 "dma_device_type": 2 00:13:59.325 } 00:13:59.325 ], 00:13:59.325 "driver_specific": {} 00:13:59.325 }' 00:13:59.325 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:59.582 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:13:59.583 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:13:59.583 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:59.583 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:13:59.583 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:59.583 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:59.583 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:13:59.840 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:59.840 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:59.840 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:13:59.840 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ 
null == null ]] 00:13:59.840 23:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:00.098 [2024-05-14 23:29:23.143717] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.098 [2024-05-14 23:29:23.143753] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.098 [2024-05-14 23:29:23.143796] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.098 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.356 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:00.356 "name": "Existed_Raid", 00:14:00.356 "uuid": "2831d6d2-6de5-4e78-a954-e4acdf274169", 00:14:00.356 "strip_size_kb": 64, 00:14:00.356 "state": "offline", 00:14:00.356 "raid_level": "raid0", 00:14:00.356 "superblock": false, 00:14:00.356 "num_base_bdevs": 3, 00:14:00.356 "num_base_bdevs_discovered": 2, 00:14:00.356 "num_base_bdevs_operational": 2, 00:14:00.356 "base_bdevs_list": [ 00:14:00.356 { 00:14:00.356 "name": null, 00:14:00.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.356 "is_configured": false, 00:14:00.356 "data_offset": 0, 00:14:00.356 "data_size": 65536 00:14:00.356 }, 00:14:00.356 { 00:14:00.356 "name": "BaseBdev2", 00:14:00.356 "uuid": "fc048fa0-a262-4ab0-8c3e-c904ced928e3", 00:14:00.356 "is_configured": true, 00:14:00.356 "data_offset": 0, 00:14:00.356 "data_size": 65536 00:14:00.356 }, 00:14:00.356 { 00:14:00.356 "name": "BaseBdev3", 00:14:00.356 "uuid": 
"d321e6ed-b843-43eb-abc1-9195d08ef4b0", 00:14:00.356 "is_configured": true, 00:14:00.356 "data_offset": 0, 00:14:00.356 "data_size": 65536 00:14:00.356 } 00:14:00.356 ] 00:14:00.356 }' 00:14:00.356 23:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:00.356 23:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.923 23:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:00.923 23:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:00.923 23:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.923 23:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:01.182 23:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:01.182 23:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:01.182 23:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:01.439 [2024-05-14 23:29:24.670439] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:01.783 23:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:01.783 23:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:01.783 23:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:01.783 23:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.783 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:01.783 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:01.783 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:02.040 [2024-05-14 23:29:25.241684] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:02.040 [2024-05-14 23:29:25.241749] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:14:02.297 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:02.297 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:02.297 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.298 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:14:02.555 23:29:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:02.555 BaseBdev2 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:02.555 23:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:02.813 23:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:03.072 [ 00:14:03.072 { 00:14:03.072 "name": "BaseBdev2", 00:14:03.072 "aliases": [ 00:14:03.072 "015b5dda-e131-4e06-b03c-16d7c3cc7c70" 00:14:03.072 ], 00:14:03.072 "product_name": "Malloc disk", 00:14:03.072 "block_size": 512, 00:14:03.072 "num_blocks": 65536, 00:14:03.072 "uuid": "015b5dda-e131-4e06-b03c-16d7c3cc7c70", 00:14:03.072 "assigned_rate_limits": { 00:14:03.072 "rw_ios_per_sec": 0, 00:14:03.072 "rw_mbytes_per_sec": 0, 00:14:03.072 "r_mbytes_per_sec": 0, 00:14:03.072 "w_mbytes_per_sec": 0 00:14:03.072 }, 00:14:03.072 "claimed": false, 00:14:03.072 "zoned": false, 00:14:03.072 "supported_io_types": { 00:14:03.072 "read": true, 00:14:03.072 "write": true, 00:14:03.072 "unmap": true, 00:14:03.072 "write_zeroes": true, 00:14:03.072 "flush": true, 00:14:03.072 "reset": true, 00:14:03.072 "compare": false, 00:14:03.072 "compare_and_write": false, 00:14:03.072 "abort": true, 00:14:03.072 "nvme_admin": false, 00:14:03.072 "nvme_io": false 00:14:03.072 }, 00:14:03.072 "memory_domains": [ 00:14:03.072 { 00:14:03.072 "dma_device_id": "system", 00:14:03.072 "dma_device_type": 1 00:14:03.072 }, 00:14:03.072 { 00:14:03.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.072 "dma_device_type": 2 00:14:03.072 } 00:14:03.072 ], 00:14:03.072 "driver_specific": {} 00:14:03.072 } 00:14:03.072 ] 00:14:03.072 23:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:03.072 23:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:03.072 23:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:03.072 23:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:03.329 BaseBdev3 00:14:03.329 23:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:14:03.329 23:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:03.329 23:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local 
bdev_timeout= 00:14:03.329 23:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:03.329 23:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:03.329 23:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:03.329 23:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:03.586 23:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:03.844 [ 00:14:03.844 { 00:14:03.844 "name": "BaseBdev3", 00:14:03.844 "aliases": [ 00:14:03.844 "58368ee0-a4cb-494a-89e3-ce0d6cb23030" 00:14:03.844 ], 00:14:03.844 "product_name": "Malloc disk", 00:14:03.844 "block_size": 512, 00:14:03.844 "num_blocks": 65536, 00:14:03.844 "uuid": "58368ee0-a4cb-494a-89e3-ce0d6cb23030", 00:14:03.844 "assigned_rate_limits": { 00:14:03.844 "rw_ios_per_sec": 0, 00:14:03.844 "rw_mbytes_per_sec": 0, 00:14:03.844 "r_mbytes_per_sec": 0, 00:14:03.844 "w_mbytes_per_sec": 0 00:14:03.844 }, 00:14:03.844 "claimed": false, 00:14:03.844 "zoned": false, 00:14:03.844 "supported_io_types": { 00:14:03.844 "read": true, 00:14:03.844 "write": true, 00:14:03.844 "unmap": true, 00:14:03.844 "write_zeroes": true, 00:14:03.844 "flush": true, 00:14:03.844 "reset": true, 00:14:03.844 "compare": false, 00:14:03.844 "compare_and_write": false, 00:14:03.844 "abort": true, 00:14:03.844 "nvme_admin": false, 00:14:03.844 "nvme_io": false 00:14:03.844 }, 00:14:03.844 "memory_domains": [ 00:14:03.844 { 00:14:03.844 "dma_device_id": "system", 00:14:03.844 "dma_device_type": 1 00:14:03.844 }, 00:14:03.844 { 00:14:03.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.844 "dma_device_type": 2 00:14:03.844 } 00:14:03.844 ], 00:14:03.844 "driver_specific": {} 00:14:03.844 } 00:14:03.844 ] 00:14:03.844 23:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:03.844 23:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:03.844 23:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:03.844 23:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:04.102 [2024-05-14 23:29:27.201233] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:04.102 [2024-05-14 23:29:27.201336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:04.102 [2024-05-14 23:29:27.201361] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.102 [2024-05-14 23:29:27.202730] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.102 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.360 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:04.360 "name": "Existed_Raid", 00:14:04.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.360 "strip_size_kb": 64, 00:14:04.360 "state": "configuring", 00:14:04.360 "raid_level": "raid0", 00:14:04.360 "superblock": false, 00:14:04.360 "num_base_bdevs": 3, 00:14:04.360 "num_base_bdevs_discovered": 2, 00:14:04.360 "num_base_bdevs_operational": 3, 00:14:04.360 "base_bdevs_list": [ 00:14:04.360 { 00:14:04.360 "name": "BaseBdev1", 00:14:04.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.360 "is_configured": false, 00:14:04.360 "data_offset": 0, 00:14:04.360 "data_size": 0 00:14:04.360 }, 00:14:04.360 { 00:14:04.360 "name": "BaseBdev2", 00:14:04.360 "uuid": "015b5dda-e131-4e06-b03c-16d7c3cc7c70", 00:14:04.360 "is_configured": true, 00:14:04.360 "data_offset": 0, 00:14:04.360 "data_size": 65536 00:14:04.360 }, 00:14:04.360 { 00:14:04.360 "name": "BaseBdev3", 00:14:04.360 "uuid": "58368ee0-a4cb-494a-89e3-ce0d6cb23030", 00:14:04.360 "is_configured": true, 00:14:04.360 "data_offset": 0, 00:14:04.360 "data_size": 65536 00:14:04.360 } 00:14:04.360 ] 00:14:04.360 }' 00:14:04.360 23:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:04.360 23:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.926 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:05.242 [2024-05-14 23:29:28.297404] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.242 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.500 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:05.500 "name": "Existed_Raid", 00:14:05.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.500 "strip_size_kb": 64, 00:14:05.500 "state": "configuring", 00:14:05.500 "raid_level": "raid0", 00:14:05.500 "superblock": false, 00:14:05.500 "num_base_bdevs": 3, 00:14:05.500 "num_base_bdevs_discovered": 1, 00:14:05.500 "num_base_bdevs_operational": 3, 00:14:05.500 "base_bdevs_list": [ 00:14:05.500 { 00:14:05.500 "name": "BaseBdev1", 00:14:05.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.500 "is_configured": false, 00:14:05.500 "data_offset": 0, 00:14:05.500 "data_size": 0 00:14:05.500 }, 00:14:05.500 { 00:14:05.500 "name": null, 00:14:05.500 "uuid": "015b5dda-e131-4e06-b03c-16d7c3cc7c70", 00:14:05.500 "is_configured": false, 00:14:05.500 "data_offset": 0, 00:14:05.500 "data_size": 65536 00:14:05.500 }, 00:14:05.500 { 00:14:05.500 "name": "BaseBdev3", 00:14:05.500 "uuid": "58368ee0-a4cb-494a-89e3-ce0d6cb23030", 00:14:05.500 "is_configured": true, 00:14:05.500 "data_offset": 0, 00:14:05.500 "data_size": 65536 00:14:05.500 } 00:14:05.500 ] 00:14:05.500 }' 00:14:05.500 23:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:05.500 23:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.067 23:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.067 23:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:06.324 23:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:14:06.324 23:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:06.582 [2024-05-14 23:29:29.685004] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:06.582 BaseBdev1 00:14:06.582 23:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:14:06.582 23:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:06.582 23:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:06.582 23:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:06.582 23:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:06.582 23:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:06.582 23:29:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:06.841 23:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:06.841 [ 00:14:06.841 { 00:14:06.841 "name": "BaseBdev1", 00:14:06.841 "aliases": [ 00:14:06.841 "ebd6817f-4454-4863-960e-dbb1a441a18f" 00:14:06.841 ], 00:14:06.841 "product_name": "Malloc disk", 00:14:06.841 "block_size": 512, 00:14:06.841 "num_blocks": 65536, 00:14:06.841 "uuid": "ebd6817f-4454-4863-960e-dbb1a441a18f", 00:14:06.841 "assigned_rate_limits": { 00:14:06.841 "rw_ios_per_sec": 0, 00:14:06.841 "rw_mbytes_per_sec": 0, 00:14:06.841 "r_mbytes_per_sec": 0, 00:14:06.841 "w_mbytes_per_sec": 0 00:14:06.841 }, 00:14:06.841 "claimed": true, 00:14:06.841 "claim_type": "exclusive_write", 00:14:06.841 "zoned": false, 00:14:06.841 "supported_io_types": { 00:14:06.841 "read": true, 00:14:06.841 "write": true, 00:14:06.841 "unmap": true, 00:14:06.841 "write_zeroes": true, 00:14:06.841 "flush": true, 00:14:06.841 "reset": true, 00:14:06.841 "compare": false, 00:14:06.841 "compare_and_write": false, 00:14:06.841 "abort": true, 00:14:06.841 "nvme_admin": false, 00:14:06.841 "nvme_io": false 00:14:06.841 }, 00:14:06.841 "memory_domains": [ 00:14:06.841 { 00:14:06.841 "dma_device_id": "system", 00:14:06.841 "dma_device_type": 1 00:14:06.841 }, 00:14:06.841 { 00:14:06.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.841 "dma_device_type": 2 00:14:06.841 } 00:14:06.841 ], 00:14:06.841 "driver_specific": {} 00:14:06.841 } 00:14:06.841 ] 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.841 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.098 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:07.098 "name": "Existed_Raid", 00:14:07.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.098 "strip_size_kb": 64, 00:14:07.098 "state": "configuring", 00:14:07.098 
"raid_level": "raid0", 00:14:07.098 "superblock": false, 00:14:07.098 "num_base_bdevs": 3, 00:14:07.098 "num_base_bdevs_discovered": 2, 00:14:07.098 "num_base_bdevs_operational": 3, 00:14:07.098 "base_bdevs_list": [ 00:14:07.098 { 00:14:07.098 "name": "BaseBdev1", 00:14:07.098 "uuid": "ebd6817f-4454-4863-960e-dbb1a441a18f", 00:14:07.098 "is_configured": true, 00:14:07.098 "data_offset": 0, 00:14:07.098 "data_size": 65536 00:14:07.098 }, 00:14:07.098 { 00:14:07.098 "name": null, 00:14:07.098 "uuid": "015b5dda-e131-4e06-b03c-16d7c3cc7c70", 00:14:07.098 "is_configured": false, 00:14:07.098 "data_offset": 0, 00:14:07.098 "data_size": 65536 00:14:07.098 }, 00:14:07.098 { 00:14:07.098 "name": "BaseBdev3", 00:14:07.098 "uuid": "58368ee0-a4cb-494a-89e3-ce0d6cb23030", 00:14:07.098 "is_configured": true, 00:14:07.098 "data_offset": 0, 00:14:07.098 "data_size": 65536 00:14:07.098 } 00:14:07.098 ] 00:14:07.098 }' 00:14:07.098 23:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:07.098 23:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.031 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.031 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:08.031 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:08.031 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:08.289 [2024-05-14 23:29:31.465405] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.289 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.547 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:08.547 "name": "Existed_Raid", 00:14:08.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.547 "strip_size_kb": 64, 
00:14:08.547 "state": "configuring", 00:14:08.547 "raid_level": "raid0", 00:14:08.547 "superblock": false, 00:14:08.547 "num_base_bdevs": 3, 00:14:08.547 "num_base_bdevs_discovered": 1, 00:14:08.547 "num_base_bdevs_operational": 3, 00:14:08.547 "base_bdevs_list": [ 00:14:08.547 { 00:14:08.547 "name": "BaseBdev1", 00:14:08.547 "uuid": "ebd6817f-4454-4863-960e-dbb1a441a18f", 00:14:08.547 "is_configured": true, 00:14:08.547 "data_offset": 0, 00:14:08.547 "data_size": 65536 00:14:08.547 }, 00:14:08.547 { 00:14:08.547 "name": null, 00:14:08.547 "uuid": "015b5dda-e131-4e06-b03c-16d7c3cc7c70", 00:14:08.547 "is_configured": false, 00:14:08.547 "data_offset": 0, 00:14:08.547 "data_size": 65536 00:14:08.547 }, 00:14:08.547 { 00:14:08.547 "name": null, 00:14:08.547 "uuid": "58368ee0-a4cb-494a-89e3-ce0d6cb23030", 00:14:08.547 "is_configured": false, 00:14:08.547 "data_offset": 0, 00:14:08.547 "data_size": 65536 00:14:08.547 } 00:14:08.547 ] 00:14:08.547 }' 00:14:08.547 23:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:08.547 23:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.481 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.481 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:09.481 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:14:09.481 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:09.739 [2024-05-14 23:29:32.853604] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.739 23:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.998 23:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.998 "name": "Existed_Raid", 00:14:09.998 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:09.998 "strip_size_kb": 64, 00:14:09.998 "state": "configuring", 00:14:09.998 "raid_level": "raid0", 00:14:09.998 "superblock": false, 00:14:09.998 "num_base_bdevs": 3, 00:14:09.998 "num_base_bdevs_discovered": 2, 00:14:09.998 "num_base_bdevs_operational": 3, 00:14:09.998 "base_bdevs_list": [ 00:14:09.998 { 00:14:09.998 "name": "BaseBdev1", 00:14:09.998 "uuid": "ebd6817f-4454-4863-960e-dbb1a441a18f", 00:14:09.998 "is_configured": true, 00:14:09.998 "data_offset": 0, 00:14:09.998 "data_size": 65536 00:14:09.998 }, 00:14:09.998 { 00:14:09.998 "name": null, 00:14:09.998 "uuid": "015b5dda-e131-4e06-b03c-16d7c3cc7c70", 00:14:09.998 "is_configured": false, 00:14:09.998 "data_offset": 0, 00:14:09.998 "data_size": 65536 00:14:09.998 }, 00:14:09.998 { 00:14:09.998 "name": "BaseBdev3", 00:14:09.998 "uuid": "58368ee0-a4cb-494a-89e3-ce0d6cb23030", 00:14:09.998 "is_configured": true, 00:14:09.998 "data_offset": 0, 00:14:09.998 "data_size": 65536 00:14:09.998 } 00:14:09.998 ] 00:14:09.998 }' 00:14:09.998 23:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.998 23:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.564 23:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.564 23:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:10.823 23:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:14:10.823 23:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:11.081 [2024-05-14 23:29:34.149800] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.081 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.339 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:11.339 
"name": "Existed_Raid", 00:14:11.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.339 "strip_size_kb": 64, 00:14:11.339 "state": "configuring", 00:14:11.339 "raid_level": "raid0", 00:14:11.339 "superblock": false, 00:14:11.339 "num_base_bdevs": 3, 00:14:11.339 "num_base_bdevs_discovered": 1, 00:14:11.339 "num_base_bdevs_operational": 3, 00:14:11.339 "base_bdevs_list": [ 00:14:11.339 { 00:14:11.339 "name": null, 00:14:11.339 "uuid": "ebd6817f-4454-4863-960e-dbb1a441a18f", 00:14:11.339 "is_configured": false, 00:14:11.339 "data_offset": 0, 00:14:11.339 "data_size": 65536 00:14:11.339 }, 00:14:11.339 { 00:14:11.339 "name": null, 00:14:11.339 "uuid": "015b5dda-e131-4e06-b03c-16d7c3cc7c70", 00:14:11.339 "is_configured": false, 00:14:11.339 "data_offset": 0, 00:14:11.339 "data_size": 65536 00:14:11.339 }, 00:14:11.339 { 00:14:11.339 "name": "BaseBdev3", 00:14:11.339 "uuid": "58368ee0-a4cb-494a-89e3-ce0d6cb23030", 00:14:11.339 "is_configured": true, 00:14:11.339 "data_offset": 0, 00:14:11.339 "data_size": 65536 00:14:11.339 } 00:14:11.339 ] 00:14:11.339 }' 00:14:11.339 23:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:11.339 23:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.273 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.273 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:12.273 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:14:12.273 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:12.531 [2024-05-14 23:29:35.659484] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.531 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.790 23:29:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:12.790 "name": "Existed_Raid", 00:14:12.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.790 "strip_size_kb": 64, 00:14:12.790 "state": "configuring", 00:14:12.790 "raid_level": "raid0", 00:14:12.790 "superblock": false, 00:14:12.790 "num_base_bdevs": 3, 00:14:12.790 "num_base_bdevs_discovered": 2, 00:14:12.790 "num_base_bdevs_operational": 3, 00:14:12.790 "base_bdevs_list": [ 00:14:12.790 { 00:14:12.790 "name": null, 00:14:12.790 "uuid": "ebd6817f-4454-4863-960e-dbb1a441a18f", 00:14:12.790 "is_configured": false, 00:14:12.790 "data_offset": 0, 00:14:12.790 "data_size": 65536 00:14:12.790 }, 00:14:12.790 { 00:14:12.790 "name": "BaseBdev2", 00:14:12.790 "uuid": "015b5dda-e131-4e06-b03c-16d7c3cc7c70", 00:14:12.790 "is_configured": true, 00:14:12.790 "data_offset": 0, 00:14:12.790 "data_size": 65536 00:14:12.790 }, 00:14:12.790 { 00:14:12.790 "name": "BaseBdev3", 00:14:12.790 "uuid": "58368ee0-a4cb-494a-89e3-ce0d6cb23030", 00:14:12.790 "is_configured": true, 00:14:12.790 "data_offset": 0, 00:14:12.790 "data_size": 65536 00:14:12.790 } 00:14:12.790 ] 00:14:12.790 }' 00:14:12.790 23:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:12.790 23:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.356 23:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.356 23:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:13.615 23:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:14:13.615 23:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.615 23:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:13.874 23:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ebd6817f-4454-4863-960e-dbb1a441a18f 00:14:14.134 [2024-05-14 23:29:37.244241] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:14.134 [2024-05-14 23:29:37.244286] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:14:14.134 [2024-05-14 23:29:37.244296] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:14.134 [2024-05-14 23:29:37.244395] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:14:14.134 [2024-05-14 23:29:37.244633] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:14:14.134 [2024-05-14 23:29:37.244648] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:14:14.134 [2024-05-14 23:29:37.244837] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.134 NewBaseBdev 00:14:14.134 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:14:14.134 23:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:14:14.134 23:29:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:14.134 23:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:14.134 23:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:14.134 23:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:14.134 23:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.392 23:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:14.651 [ 00:14:14.651 { 00:14:14.651 "name": "NewBaseBdev", 00:14:14.651 "aliases": [ 00:14:14.651 "ebd6817f-4454-4863-960e-dbb1a441a18f" 00:14:14.651 ], 00:14:14.651 "product_name": "Malloc disk", 00:14:14.651 "block_size": 512, 00:14:14.651 "num_blocks": 65536, 00:14:14.651 "uuid": "ebd6817f-4454-4863-960e-dbb1a441a18f", 00:14:14.651 "assigned_rate_limits": { 00:14:14.651 "rw_ios_per_sec": 0, 00:14:14.651 "rw_mbytes_per_sec": 0, 00:14:14.651 "r_mbytes_per_sec": 0, 00:14:14.651 "w_mbytes_per_sec": 0 00:14:14.651 }, 00:14:14.651 "claimed": true, 00:14:14.651 "claim_type": "exclusive_write", 00:14:14.651 "zoned": false, 00:14:14.651 "supported_io_types": { 00:14:14.651 "read": true, 00:14:14.651 "write": true, 00:14:14.651 "unmap": true, 00:14:14.651 "write_zeroes": true, 00:14:14.651 "flush": true, 00:14:14.651 "reset": true, 00:14:14.651 "compare": false, 00:14:14.651 "compare_and_write": false, 00:14:14.651 "abort": true, 00:14:14.651 "nvme_admin": false, 00:14:14.651 "nvme_io": false 00:14:14.651 }, 00:14:14.651 "memory_domains": [ 00:14:14.651 { 00:14:14.651 "dma_device_id": "system", 00:14:14.651 "dma_device_type": 1 00:14:14.651 }, 00:14:14.651 { 00:14:14.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.651 "dma_device_type": 2 00:14:14.651 } 00:14:14.651 ], 00:14:14.651 "driver_specific": {} 00:14:14.651 } 00:14:14.651 ] 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.651 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.909 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:14.909 "name": "Existed_Raid", 00:14:14.909 "uuid": "4e73f08b-59f3-4724-b8b2-117ee87300bb", 00:14:14.909 "strip_size_kb": 64, 00:14:14.909 "state": "online", 00:14:14.909 "raid_level": "raid0", 00:14:14.909 "superblock": false, 00:14:14.909 "num_base_bdevs": 3, 00:14:14.909 "num_base_bdevs_discovered": 3, 00:14:14.909 "num_base_bdevs_operational": 3, 00:14:14.909 "base_bdevs_list": [ 00:14:14.909 { 00:14:14.909 "name": "NewBaseBdev", 00:14:14.909 "uuid": "ebd6817f-4454-4863-960e-dbb1a441a18f", 00:14:14.909 "is_configured": true, 00:14:14.909 "data_offset": 0, 00:14:14.909 "data_size": 65536 00:14:14.909 }, 00:14:14.909 { 00:14:14.909 "name": "BaseBdev2", 00:14:14.909 "uuid": "015b5dda-e131-4e06-b03c-16d7c3cc7c70", 00:14:14.909 "is_configured": true, 00:14:14.909 "data_offset": 0, 00:14:14.909 "data_size": 65536 00:14:14.909 }, 00:14:14.909 { 00:14:14.909 "name": "BaseBdev3", 00:14:14.909 "uuid": "58368ee0-a4cb-494a-89e3-ce0d6cb23030", 00:14:14.909 "is_configured": true, 00:14:14.909 "data_offset": 0, 00:14:14.909 "data_size": 65536 00:14:14.909 } 00:14:14.909 ] 00:14:14.909 }' 00:14:14.909 23:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:14.909 23:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.475 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:14:15.475 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:15.475 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:15.475 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:15.475 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:15.475 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:14:15.475 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:15.475 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:15.733 [2024-05-14 23:29:38.780787] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.733 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:15.733 "name": "Existed_Raid", 00:14:15.733 "aliases": [ 00:14:15.733 "4e73f08b-59f3-4724-b8b2-117ee87300bb" 00:14:15.733 ], 00:14:15.733 "product_name": "Raid Volume", 00:14:15.733 "block_size": 512, 00:14:15.733 "num_blocks": 196608, 00:14:15.733 "uuid": "4e73f08b-59f3-4724-b8b2-117ee87300bb", 00:14:15.733 "assigned_rate_limits": { 00:14:15.733 "rw_ios_per_sec": 0, 00:14:15.733 "rw_mbytes_per_sec": 0, 00:14:15.733 "r_mbytes_per_sec": 0, 00:14:15.733 "w_mbytes_per_sec": 0 00:14:15.733 }, 00:14:15.733 "claimed": false, 00:14:15.733 "zoned": false, 00:14:15.733 "supported_io_types": { 00:14:15.733 "read": true, 00:14:15.733 "write": true, 00:14:15.733 "unmap": true, 00:14:15.733 "write_zeroes": true, 00:14:15.733 "flush": true, 00:14:15.733 "reset": true, 
00:14:15.733 "compare": false, 00:14:15.733 "compare_and_write": false, 00:14:15.733 "abort": false, 00:14:15.733 "nvme_admin": false, 00:14:15.733 "nvme_io": false 00:14:15.733 }, 00:14:15.733 "memory_domains": [ 00:14:15.733 { 00:14:15.733 "dma_device_id": "system", 00:14:15.733 "dma_device_type": 1 00:14:15.733 }, 00:14:15.733 { 00:14:15.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.733 "dma_device_type": 2 00:14:15.733 }, 00:14:15.733 { 00:14:15.733 "dma_device_id": "system", 00:14:15.733 "dma_device_type": 1 00:14:15.733 }, 00:14:15.733 { 00:14:15.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.733 "dma_device_type": 2 00:14:15.733 }, 00:14:15.733 { 00:14:15.733 "dma_device_id": "system", 00:14:15.733 "dma_device_type": 1 00:14:15.733 }, 00:14:15.733 { 00:14:15.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.733 "dma_device_type": 2 00:14:15.733 } 00:14:15.733 ], 00:14:15.733 "driver_specific": { 00:14:15.733 "raid": { 00:14:15.733 "uuid": "4e73f08b-59f3-4724-b8b2-117ee87300bb", 00:14:15.733 "strip_size_kb": 64, 00:14:15.733 "state": "online", 00:14:15.733 "raid_level": "raid0", 00:14:15.734 "superblock": false, 00:14:15.734 "num_base_bdevs": 3, 00:14:15.734 "num_base_bdevs_discovered": 3, 00:14:15.734 "num_base_bdevs_operational": 3, 00:14:15.734 "base_bdevs_list": [ 00:14:15.734 { 00:14:15.734 "name": "NewBaseBdev", 00:14:15.734 "uuid": "ebd6817f-4454-4863-960e-dbb1a441a18f", 00:14:15.734 "is_configured": true, 00:14:15.734 "data_offset": 0, 00:14:15.734 "data_size": 65536 00:14:15.734 }, 00:14:15.734 { 00:14:15.734 "name": "BaseBdev2", 00:14:15.734 "uuid": "015b5dda-e131-4e06-b03c-16d7c3cc7c70", 00:14:15.734 "is_configured": true, 00:14:15.734 "data_offset": 0, 00:14:15.734 "data_size": 65536 00:14:15.734 }, 00:14:15.734 { 00:14:15.734 "name": "BaseBdev3", 00:14:15.734 "uuid": "58368ee0-a4cb-494a-89e3-ce0d6cb23030", 00:14:15.734 "is_configured": true, 00:14:15.734 "data_offset": 0, 00:14:15.734 "data_size": 65536 00:14:15.734 } 00:14:15.734 ] 00:14:15.734 } 00:14:15.734 } 00:14:15.734 }' 00:14:15.734 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:15.734 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:14:15.734 BaseBdev2 00:14:15.734 BaseBdev3' 00:14:15.734 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:15.734 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:15.734 23:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:15.992 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:15.992 "name": "NewBaseBdev", 00:14:15.992 "aliases": [ 00:14:15.992 "ebd6817f-4454-4863-960e-dbb1a441a18f" 00:14:15.992 ], 00:14:15.992 "product_name": "Malloc disk", 00:14:15.992 "block_size": 512, 00:14:15.992 "num_blocks": 65536, 00:14:15.992 "uuid": "ebd6817f-4454-4863-960e-dbb1a441a18f", 00:14:15.992 "assigned_rate_limits": { 00:14:15.992 "rw_ios_per_sec": 0, 00:14:15.992 "rw_mbytes_per_sec": 0, 00:14:15.992 "r_mbytes_per_sec": 0, 00:14:15.992 "w_mbytes_per_sec": 0 00:14:15.992 }, 00:14:15.992 "claimed": true, 00:14:15.992 "claim_type": "exclusive_write", 00:14:15.992 "zoned": false, 00:14:15.992 "supported_io_types": { 00:14:15.992 
"read": true, 00:14:15.992 "write": true, 00:14:15.992 "unmap": true, 00:14:15.992 "write_zeroes": true, 00:14:15.992 "flush": true, 00:14:15.992 "reset": true, 00:14:15.992 "compare": false, 00:14:15.992 "compare_and_write": false, 00:14:15.992 "abort": true, 00:14:15.992 "nvme_admin": false, 00:14:15.992 "nvme_io": false 00:14:15.992 }, 00:14:15.992 "memory_domains": [ 00:14:15.992 { 00:14:15.992 "dma_device_id": "system", 00:14:15.992 "dma_device_type": 1 00:14:15.992 }, 00:14:15.992 { 00:14:15.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.992 "dma_device_type": 2 00:14:15.992 } 00:14:15.992 ], 00:14:15.992 "driver_specific": {} 00:14:15.992 }' 00:14:15.992 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:15.992 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:15.992 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:15.992 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:15.992 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:16.250 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.250 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:16.250 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:16.250 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.250 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:16.250 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:16.509 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:16.509 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:16.509 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:16.509 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:16.767 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:16.767 "name": "BaseBdev2", 00:14:16.767 "aliases": [ 00:14:16.767 "015b5dda-e131-4e06-b03c-16d7c3cc7c70" 00:14:16.767 ], 00:14:16.767 "product_name": "Malloc disk", 00:14:16.767 "block_size": 512, 00:14:16.767 "num_blocks": 65536, 00:14:16.767 "uuid": "015b5dda-e131-4e06-b03c-16d7c3cc7c70", 00:14:16.767 "assigned_rate_limits": { 00:14:16.767 "rw_ios_per_sec": 0, 00:14:16.767 "rw_mbytes_per_sec": 0, 00:14:16.767 "r_mbytes_per_sec": 0, 00:14:16.767 "w_mbytes_per_sec": 0 00:14:16.767 }, 00:14:16.767 "claimed": true, 00:14:16.767 "claim_type": "exclusive_write", 00:14:16.767 "zoned": false, 00:14:16.767 "supported_io_types": { 00:14:16.767 "read": true, 00:14:16.767 "write": true, 00:14:16.767 "unmap": true, 00:14:16.767 "write_zeroes": true, 00:14:16.767 "flush": true, 00:14:16.767 "reset": true, 00:14:16.767 "compare": false, 00:14:16.767 "compare_and_write": false, 00:14:16.767 "abort": true, 00:14:16.767 "nvme_admin": false, 00:14:16.767 "nvme_io": false 00:14:16.767 }, 00:14:16.767 "memory_domains": [ 00:14:16.767 { 00:14:16.767 "dma_device_id": "system", 00:14:16.767 "dma_device_type": 1 00:14:16.767 }, 00:14:16.767 { 
00:14:16.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.767 "dma_device_type": 2 00:14:16.767 } 00:14:16.767 ], 00:14:16.767 "driver_specific": {} 00:14:16.767 }' 00:14:16.767 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:16.767 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:16.767 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:16.767 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:16.767 23:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:16.767 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.767 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:17.025 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:17.025 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.025 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:17.025 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:17.025 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:17.025 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:17.025 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:17.025 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:17.283 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:17.283 "name": "BaseBdev3", 00:14:17.283 "aliases": [ 00:14:17.283 "58368ee0-a4cb-494a-89e3-ce0d6cb23030" 00:14:17.283 ], 00:14:17.283 "product_name": "Malloc disk", 00:14:17.283 "block_size": 512, 00:14:17.283 "num_blocks": 65536, 00:14:17.283 "uuid": "58368ee0-a4cb-494a-89e3-ce0d6cb23030", 00:14:17.283 "assigned_rate_limits": { 00:14:17.283 "rw_ios_per_sec": 0, 00:14:17.283 "rw_mbytes_per_sec": 0, 00:14:17.283 "r_mbytes_per_sec": 0, 00:14:17.283 "w_mbytes_per_sec": 0 00:14:17.283 }, 00:14:17.283 "claimed": true, 00:14:17.283 "claim_type": "exclusive_write", 00:14:17.283 "zoned": false, 00:14:17.283 "supported_io_types": { 00:14:17.283 "read": true, 00:14:17.283 "write": true, 00:14:17.283 "unmap": true, 00:14:17.283 "write_zeroes": true, 00:14:17.283 "flush": true, 00:14:17.284 "reset": true, 00:14:17.284 "compare": false, 00:14:17.284 "compare_and_write": false, 00:14:17.284 "abort": true, 00:14:17.284 "nvme_admin": false, 00:14:17.284 "nvme_io": false 00:14:17.284 }, 00:14:17.284 "memory_domains": [ 00:14:17.284 { 00:14:17.284 "dma_device_id": "system", 00:14:17.284 "dma_device_type": 1 00:14:17.284 }, 00:14:17.284 { 00:14:17.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.284 "dma_device_type": 2 00:14:17.284 } 00:14:17.284 ], 00:14:17.284 "driver_specific": {} 00:14:17.284 }' 00:14:17.284 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:17.284 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:17.542 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:17.542 23:29:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:17.542 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:17.542 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.542 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:17.542 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:17.542 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.542 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:17.801 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:17.801 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:17.801 23:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:17.801 [2024-05-14 23:29:41.064906] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.801 [2024-05-14 23:29:41.064949] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.801 [2024-05-14 23:29:41.065021] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.801 [2024-05-14 23:29:41.065062] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.801 [2024-05-14 23:29:41.065074] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:14:17.801 23:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 56418 00:14:17.801 23:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 56418 ']' 00:14:17.801 23:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 56418 00:14:17.801 23:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:14:17.801 23:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:17.801 23:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 56418 00:14:18.129 killing process with pid 56418 00:14:18.129 23:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:18.129 23:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:18.129 23:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 56418' 00:14:18.129 23:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 56418 00:14:18.129 23:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 56418 00:14:18.129 [2024-05-14 23:29:41.096868] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.129 [2024-05-14 23:29:41.345052] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:14:19.505 00:14:19.505 real 0m30.446s 00:14:19.505 user 0m57.400s 00:14:19.505 sys 0m3.047s 00:14:19.505 ************************************ 00:14:19.505 END TEST 
raid_state_function_test 00:14:19.505 ************************************ 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.505 23:29:42 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:14:19.505 23:29:42 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:19.505 23:29:42 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:19.505 23:29:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.505 ************************************ 00:14:19.505 START TEST raid_state_function_test_sb 00:14:19.505 ************************************ 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 true 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 
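The _sb variant repeats the raid0 state checks with superblock=true, so the array is created with bdev_raid_create -s and every base bdev carries on-disk raid metadata. A minimal sketch of the create-then-populate flow this test automates is shown below; it assumes an SPDK app is already serving RPCs on /var/tmp/spdk-raid.sock, and the rpc and raid_state helpers are conveniences introduced here, not part of the test scripts.

#!/usr/bin/env bash
# Sketch only: build the raid0 array with an on-disk superblock before its
# base bdevs exist, then add the bdevs and watch the state change.
set -euo pipefail

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
raid_state() { rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'; }

# -z 64: 64 KiB strip size, -s: write a superblock to every base bdev.
rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
echo "state after create: $(raid_state)"                 # expected: configuring

# 32 MiB malloc bdevs with 512-byte blocks, matching the trace.
for i in 1 2 3; do
    rpc bdev_malloc_create 32 512 -b "BaseBdev$i"
done
echo "state with all base bdevs present: $(raid_state)"  # expected: online

Because none of the base bdevs exist at create time, the array stays in the configuring state until all three appear, which is the transition the verify_raid_bdev_state calls in this trace keep asserting.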
00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:14:19.505 Process raid pid: 57418 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=57418 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 57418' 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 57418 /var/tmp/spdk-raid.sock 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 57418 ']' 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:19.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:19.505 23:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.505 [2024-05-14 23:29:42.790344] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
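Before any of these RPCs can be issued, the harness has to bring up an RPC target: the trace just above launches bdev_svc with its RPC server bound to the private /var/tmp/spdk-raid.sock socket and the bdev_raid debug log flag, then waitforlisten blocks until that socket answers. A rough equivalent is sketched below; the polling loop is an illustrative stand-in for the shared waitforlisten helper, not its actual implementation.

#!/usr/bin/env bash
# Sketch: start the bdev service app and wait for its RPC socket to answer.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk-raid.sock

"$SPDK/test/app/bdev_svc/bdev_svc" -r "$RPC_SOCK" -i 0 -L bdev_raid &
raid_pid=$!

# Poll the server with a cheap RPC; give up if bdev_svc died before listening.
until "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$raid_pid" 2>/dev/null || { echo "bdev_svc exited early" >&2; exit 1; }
    sleep 0.1
done
echo "RPC target ready on $RPC_SOCK (pid $raid_pid)"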
00:14:19.505 [2024-05-14 23:29:42.790561] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.763 [2024-05-14 23:29:42.961635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.021 [2024-05-14 23:29:43.178440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.280 [2024-05-14 23:29:43.381700] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.538 23:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:20.538 23:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:14:20.538 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:20.538 [2024-05-14 23:29:43.821853] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:20.538 [2024-05-14 23:29:43.821940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:20.538 [2024-05-14 23:29:43.821956] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.538 [2024-05-14 23:29:43.821977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.538 [2024-05-14 23:29:43.821987] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:20.538 [2024-05-14 23:29:43.822032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.795 23:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.052 23:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:21.052 "name": "Existed_Raid", 00:14:21.052 "uuid": 
"a69555d1-4616-4893-bfef-bac0cabcd507", 00:14:21.052 "strip_size_kb": 64, 00:14:21.052 "state": "configuring", 00:14:21.052 "raid_level": "raid0", 00:14:21.052 "superblock": true, 00:14:21.052 "num_base_bdevs": 3, 00:14:21.052 "num_base_bdevs_discovered": 0, 00:14:21.052 "num_base_bdevs_operational": 3, 00:14:21.052 "base_bdevs_list": [ 00:14:21.052 { 00:14:21.052 "name": "BaseBdev1", 00:14:21.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.052 "is_configured": false, 00:14:21.052 "data_offset": 0, 00:14:21.052 "data_size": 0 00:14:21.052 }, 00:14:21.052 { 00:14:21.052 "name": "BaseBdev2", 00:14:21.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.052 "is_configured": false, 00:14:21.052 "data_offset": 0, 00:14:21.052 "data_size": 0 00:14:21.052 }, 00:14:21.052 { 00:14:21.052 "name": "BaseBdev3", 00:14:21.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.052 "is_configured": false, 00:14:21.052 "data_offset": 0, 00:14:21.052 "data_size": 0 00:14:21.052 } 00:14:21.052 ] 00:14:21.052 }' 00:14:21.052 23:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:21.052 23:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.687 23:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:21.946 [2024-05-14 23:29:45.025850] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:21.946 [2024-05-14 23:29:45.025898] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:14:21.946 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:21.946 [2024-05-14 23:29:45.217917] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.946 [2024-05-14 23:29:45.218001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.946 [2024-05-14 23:29:45.218016] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.946 [2024-05-14 23:29:45.218046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.946 [2024-05-14 23:29:45.218057] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:21.946 [2024-05-14 23:29:45.218082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:22.203 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:22.203 [2024-05-14 23:29:45.445900] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.203 BaseBdev1 00:14:22.203 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:14:22.203 23:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:22.203 23:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:22.203 23:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 
00:14:22.203 23:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:22.203 23:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:22.203 23:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.460 23:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.718 [ 00:14:22.718 { 00:14:22.718 "name": "BaseBdev1", 00:14:22.718 "aliases": [ 00:14:22.718 "389beb59-8705-4530-a9a6-b090c5444f36" 00:14:22.718 ], 00:14:22.718 "product_name": "Malloc disk", 00:14:22.718 "block_size": 512, 00:14:22.718 "num_blocks": 65536, 00:14:22.718 "uuid": "389beb59-8705-4530-a9a6-b090c5444f36", 00:14:22.718 "assigned_rate_limits": { 00:14:22.718 "rw_ios_per_sec": 0, 00:14:22.718 "rw_mbytes_per_sec": 0, 00:14:22.718 "r_mbytes_per_sec": 0, 00:14:22.718 "w_mbytes_per_sec": 0 00:14:22.718 }, 00:14:22.718 "claimed": true, 00:14:22.718 "claim_type": "exclusive_write", 00:14:22.718 "zoned": false, 00:14:22.718 "supported_io_types": { 00:14:22.718 "read": true, 00:14:22.718 "write": true, 00:14:22.718 "unmap": true, 00:14:22.718 "write_zeroes": true, 00:14:22.718 "flush": true, 00:14:22.718 "reset": true, 00:14:22.718 "compare": false, 00:14:22.718 "compare_and_write": false, 00:14:22.718 "abort": true, 00:14:22.718 "nvme_admin": false, 00:14:22.718 "nvme_io": false 00:14:22.718 }, 00:14:22.718 "memory_domains": [ 00:14:22.718 { 00:14:22.718 "dma_device_id": "system", 00:14:22.718 "dma_device_type": 1 00:14:22.718 }, 00:14:22.718 { 00:14:22.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.718 "dma_device_type": 2 00:14:22.718 } 00:14:22.718 ], 00:14:22.718 "driver_specific": {} 00:14:22.718 } 00:14:22.718 ] 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.718 23:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:14:22.976 23:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.976 "name": "Existed_Raid", 00:14:22.976 "uuid": "c6fb1528-6437-4a93-abdd-7fc66f9e3fe4", 00:14:22.976 "strip_size_kb": 64, 00:14:22.976 "state": "configuring", 00:14:22.976 "raid_level": "raid0", 00:14:22.976 "superblock": true, 00:14:22.976 "num_base_bdevs": 3, 00:14:22.976 "num_base_bdevs_discovered": 1, 00:14:22.976 "num_base_bdevs_operational": 3, 00:14:22.976 "base_bdevs_list": [ 00:14:22.976 { 00:14:22.976 "name": "BaseBdev1", 00:14:22.976 "uuid": "389beb59-8705-4530-a9a6-b090c5444f36", 00:14:22.976 "is_configured": true, 00:14:22.976 "data_offset": 2048, 00:14:22.976 "data_size": 63488 00:14:22.976 }, 00:14:22.976 { 00:14:22.976 "name": "BaseBdev2", 00:14:22.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.976 "is_configured": false, 00:14:22.976 "data_offset": 0, 00:14:22.976 "data_size": 0 00:14:22.976 }, 00:14:22.976 { 00:14:22.976 "name": "BaseBdev3", 00:14:22.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.976 "is_configured": false, 00:14:22.976 "data_offset": 0, 00:14:22.976 "data_size": 0 00:14:22.976 } 00:14:22.976 ] 00:14:22.976 }' 00:14:22.976 23:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.976 23:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.541 23:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:23.800 [2024-05-14 23:29:46.938142] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.800 [2024-05-14 23:29:46.938204] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:14:23.800 23:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:24.058 [2024-05-14 23:29:47.170251] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.058 [2024-05-14 23:29:47.171848] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.058 [2024-05-14 23:29:47.171911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.058 [2024-05-14 23:29:47.171925] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:24.058 [2024-05-14 23:29:47.171954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.058 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.316 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:24.317 "name": "Existed_Raid", 00:14:24.317 "uuid": "34fedab9-cc13-4b88-bde3-0b3bc7ccfc52", 00:14:24.317 "strip_size_kb": 64, 00:14:24.317 "state": "configuring", 00:14:24.317 "raid_level": "raid0", 00:14:24.317 "superblock": true, 00:14:24.317 "num_base_bdevs": 3, 00:14:24.317 "num_base_bdevs_discovered": 1, 00:14:24.317 "num_base_bdevs_operational": 3, 00:14:24.317 "base_bdevs_list": [ 00:14:24.317 { 00:14:24.317 "name": "BaseBdev1", 00:14:24.317 "uuid": "389beb59-8705-4530-a9a6-b090c5444f36", 00:14:24.317 "is_configured": true, 00:14:24.317 "data_offset": 2048, 00:14:24.317 "data_size": 63488 00:14:24.317 }, 00:14:24.317 { 00:14:24.317 "name": "BaseBdev2", 00:14:24.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.317 "is_configured": false, 00:14:24.317 "data_offset": 0, 00:14:24.317 "data_size": 0 00:14:24.317 }, 00:14:24.317 { 00:14:24.317 "name": "BaseBdev3", 00:14:24.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.317 "is_configured": false, 00:14:24.317 "data_offset": 0, 00:14:24.317 "data_size": 0 00:14:24.317 } 00:14:24.317 ] 00:14:24.317 }' 00:14:24.317 23:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:24.317 23:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.883 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:25.179 BaseBdev2 00:14:25.179 [2024-05-14 23:29:48.315424] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.179 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:14:25.179 23:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:25.179 23:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:25.179 23:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:25.179 23:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:25.179 23:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:25.179 23:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:25.436 23:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:25.694 [ 00:14:25.694 { 00:14:25.694 "name": "BaseBdev2", 00:14:25.694 "aliases": [ 00:14:25.694 "fbcdd18f-d152-41f8-8923-176beed3d5b8" 00:14:25.694 ], 00:14:25.694 "product_name": "Malloc disk", 00:14:25.694 "block_size": 512, 00:14:25.694 "num_blocks": 65536, 00:14:25.694 "uuid": "fbcdd18f-d152-41f8-8923-176beed3d5b8", 00:14:25.694 "assigned_rate_limits": { 00:14:25.694 "rw_ios_per_sec": 0, 00:14:25.694 "rw_mbytes_per_sec": 0, 00:14:25.694 "r_mbytes_per_sec": 0, 00:14:25.694 "w_mbytes_per_sec": 0 00:14:25.694 }, 00:14:25.694 "claimed": true, 00:14:25.694 "claim_type": "exclusive_write", 00:14:25.694 "zoned": false, 00:14:25.694 "supported_io_types": { 00:14:25.694 "read": true, 00:14:25.694 "write": true, 00:14:25.694 "unmap": true, 00:14:25.694 "write_zeroes": true, 00:14:25.694 "flush": true, 00:14:25.694 "reset": true, 00:14:25.694 "compare": false, 00:14:25.694 "compare_and_write": false, 00:14:25.694 "abort": true, 00:14:25.694 "nvme_admin": false, 00:14:25.694 "nvme_io": false 00:14:25.694 }, 00:14:25.694 "memory_domains": [ 00:14:25.694 { 00:14:25.694 "dma_device_id": "system", 00:14:25.694 "dma_device_type": 1 00:14:25.694 }, 00:14:25.694 { 00:14:25.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.694 "dma_device_type": 2 00:14:25.694 } 00:14:25.694 ], 00:14:25.694 "driver_specific": {} 00:14:25.694 } 00:14:25.694 ] 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.694 23:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.951 23:29:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:25.951 "name": "Existed_Raid", 00:14:25.951 "uuid": "34fedab9-cc13-4b88-bde3-0b3bc7ccfc52", 00:14:25.951 "strip_size_kb": 64, 00:14:25.951 "state": "configuring", 00:14:25.951 "raid_level": "raid0", 00:14:25.951 "superblock": true, 00:14:25.951 "num_base_bdevs": 3, 00:14:25.951 "num_base_bdevs_discovered": 2, 00:14:25.951 "num_base_bdevs_operational": 3, 00:14:25.951 "base_bdevs_list": [ 00:14:25.951 { 00:14:25.951 "name": "BaseBdev1", 00:14:25.951 "uuid": "389beb59-8705-4530-a9a6-b090c5444f36", 00:14:25.951 "is_configured": true, 00:14:25.951 "data_offset": 2048, 00:14:25.951 "data_size": 63488 00:14:25.951 }, 00:14:25.951 { 00:14:25.951 "name": "BaseBdev2", 00:14:25.951 "uuid": "fbcdd18f-d152-41f8-8923-176beed3d5b8", 00:14:25.951 "is_configured": true, 00:14:25.951 "data_offset": 2048, 00:14:25.951 "data_size": 63488 00:14:25.951 }, 00:14:25.951 { 00:14:25.951 "name": "BaseBdev3", 00:14:25.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.951 "is_configured": false, 00:14:25.951 "data_offset": 0, 00:14:25.951 "data_size": 0 00:14:25.951 } 00:14:25.951 ] 00:14:25.951 }' 00:14:25.951 23:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:25.951 23:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.517 23:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:26.775 [2024-05-14 23:29:49.932661] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.775 [2024-05-14 23:29:49.932847] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:14:26.775 [2024-05-14 23:29:49.932864] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:26.775 [2024-05-14 23:29:49.932973] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:26.775 BaseBdev3 00:14:26.775 [2024-05-14 23:29:49.933474] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:14:26.775 [2024-05-14 23:29:49.933497] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:14:26.775 [2024-05-14 23:29:49.933626] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.775 23:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:14:26.775 23:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:26.775 23:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:26.775 23:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:26.775 23:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:26.775 23:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:26.775 23:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:27.032 23:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:27.289 [ 00:14:27.289 { 00:14:27.289 "name": "BaseBdev3", 00:14:27.289 "aliases": [ 00:14:27.289 "281c0529-8484-4df0-b4ef-e702e7f401c1" 00:14:27.289 ], 00:14:27.289 "product_name": "Malloc disk", 00:14:27.289 "block_size": 512, 00:14:27.289 "num_blocks": 65536, 00:14:27.289 "uuid": "281c0529-8484-4df0-b4ef-e702e7f401c1", 00:14:27.289 "assigned_rate_limits": { 00:14:27.289 "rw_ios_per_sec": 0, 00:14:27.289 "rw_mbytes_per_sec": 0, 00:14:27.289 "r_mbytes_per_sec": 0, 00:14:27.289 "w_mbytes_per_sec": 0 00:14:27.289 }, 00:14:27.289 "claimed": true, 00:14:27.289 "claim_type": "exclusive_write", 00:14:27.289 "zoned": false, 00:14:27.289 "supported_io_types": { 00:14:27.289 "read": true, 00:14:27.289 "write": true, 00:14:27.289 "unmap": true, 00:14:27.289 "write_zeroes": true, 00:14:27.289 "flush": true, 00:14:27.289 "reset": true, 00:14:27.289 "compare": false, 00:14:27.289 "compare_and_write": false, 00:14:27.289 "abort": true, 00:14:27.289 "nvme_admin": false, 00:14:27.289 "nvme_io": false 00:14:27.289 }, 00:14:27.289 "memory_domains": [ 00:14:27.289 { 00:14:27.289 "dma_device_id": "system", 00:14:27.289 "dma_device_type": 1 00:14:27.289 }, 00:14:27.289 { 00:14:27.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.289 "dma_device_type": 2 00:14:27.289 } 00:14:27.289 ], 00:14:27.289 "driver_specific": {} 00:14:27.289 } 00:14:27.289 ] 00:14:27.289 23:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:27.289 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:14:27.289 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.290 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.547 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:27.547 "name": "Existed_Raid", 00:14:27.547 "uuid": "34fedab9-cc13-4b88-bde3-0b3bc7ccfc52", 00:14:27.547 "strip_size_kb": 64, 00:14:27.547 "state": "online", 00:14:27.547 "raid_level": "raid0", 00:14:27.547 "superblock": true, 00:14:27.547 
"num_base_bdevs": 3, 00:14:27.547 "num_base_bdevs_discovered": 3, 00:14:27.547 "num_base_bdevs_operational": 3, 00:14:27.547 "base_bdevs_list": [ 00:14:27.547 { 00:14:27.547 "name": "BaseBdev1", 00:14:27.547 "uuid": "389beb59-8705-4530-a9a6-b090c5444f36", 00:14:27.547 "is_configured": true, 00:14:27.547 "data_offset": 2048, 00:14:27.547 "data_size": 63488 00:14:27.547 }, 00:14:27.547 { 00:14:27.547 "name": "BaseBdev2", 00:14:27.547 "uuid": "fbcdd18f-d152-41f8-8923-176beed3d5b8", 00:14:27.547 "is_configured": true, 00:14:27.547 "data_offset": 2048, 00:14:27.547 "data_size": 63488 00:14:27.547 }, 00:14:27.547 { 00:14:27.547 "name": "BaseBdev3", 00:14:27.547 "uuid": "281c0529-8484-4df0-b4ef-e702e7f401c1", 00:14:27.547 "is_configured": true, 00:14:27.547 "data_offset": 2048, 00:14:27.547 "data_size": 63488 00:14:27.547 } 00:14:27.547 ] 00:14:27.547 }' 00:14:27.547 23:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:27.547 23:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.112 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:14:28.112 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:28.112 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:28.112 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:28.112 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:28.112 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:14:28.112 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:28.112 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:28.391 [2024-05-14 23:29:51.445100] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.391 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:28.391 "name": "Existed_Raid", 00:14:28.391 "aliases": [ 00:14:28.391 "34fedab9-cc13-4b88-bde3-0b3bc7ccfc52" 00:14:28.391 ], 00:14:28.391 "product_name": "Raid Volume", 00:14:28.391 "block_size": 512, 00:14:28.391 "num_blocks": 190464, 00:14:28.391 "uuid": "34fedab9-cc13-4b88-bde3-0b3bc7ccfc52", 00:14:28.391 "assigned_rate_limits": { 00:14:28.391 "rw_ios_per_sec": 0, 00:14:28.391 "rw_mbytes_per_sec": 0, 00:14:28.391 "r_mbytes_per_sec": 0, 00:14:28.391 "w_mbytes_per_sec": 0 00:14:28.391 }, 00:14:28.391 "claimed": false, 00:14:28.391 "zoned": false, 00:14:28.391 "supported_io_types": { 00:14:28.391 "read": true, 00:14:28.391 "write": true, 00:14:28.391 "unmap": true, 00:14:28.391 "write_zeroes": true, 00:14:28.391 "flush": true, 00:14:28.391 "reset": true, 00:14:28.391 "compare": false, 00:14:28.391 "compare_and_write": false, 00:14:28.391 "abort": false, 00:14:28.391 "nvme_admin": false, 00:14:28.391 "nvme_io": false 00:14:28.391 }, 00:14:28.391 "memory_domains": [ 00:14:28.391 { 00:14:28.391 "dma_device_id": "system", 00:14:28.391 "dma_device_type": 1 00:14:28.391 }, 00:14:28.391 { 00:14:28.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.391 "dma_device_type": 2 00:14:28.391 }, 00:14:28.391 { 00:14:28.391 "dma_device_id": "system", 
00:14:28.391 "dma_device_type": 1 00:14:28.391 }, 00:14:28.391 { 00:14:28.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.391 "dma_device_type": 2 00:14:28.391 }, 00:14:28.391 { 00:14:28.391 "dma_device_id": "system", 00:14:28.391 "dma_device_type": 1 00:14:28.391 }, 00:14:28.391 { 00:14:28.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.391 "dma_device_type": 2 00:14:28.391 } 00:14:28.391 ], 00:14:28.391 "driver_specific": { 00:14:28.391 "raid": { 00:14:28.391 "uuid": "34fedab9-cc13-4b88-bde3-0b3bc7ccfc52", 00:14:28.391 "strip_size_kb": 64, 00:14:28.391 "state": "online", 00:14:28.391 "raid_level": "raid0", 00:14:28.391 "superblock": true, 00:14:28.391 "num_base_bdevs": 3, 00:14:28.391 "num_base_bdevs_discovered": 3, 00:14:28.391 "num_base_bdevs_operational": 3, 00:14:28.391 "base_bdevs_list": [ 00:14:28.391 { 00:14:28.391 "name": "BaseBdev1", 00:14:28.391 "uuid": "389beb59-8705-4530-a9a6-b090c5444f36", 00:14:28.391 "is_configured": true, 00:14:28.391 "data_offset": 2048, 00:14:28.392 "data_size": 63488 00:14:28.392 }, 00:14:28.392 { 00:14:28.392 "name": "BaseBdev2", 00:14:28.392 "uuid": "fbcdd18f-d152-41f8-8923-176beed3d5b8", 00:14:28.392 "is_configured": true, 00:14:28.392 "data_offset": 2048, 00:14:28.392 "data_size": 63488 00:14:28.392 }, 00:14:28.392 { 00:14:28.392 "name": "BaseBdev3", 00:14:28.392 "uuid": "281c0529-8484-4df0-b4ef-e702e7f401c1", 00:14:28.392 "is_configured": true, 00:14:28.392 "data_offset": 2048, 00:14:28.392 "data_size": 63488 00:14:28.392 } 00:14:28.392 ] 00:14:28.392 } 00:14:28.392 } 00:14:28.392 }' 00:14:28.392 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:28.392 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:14:28.392 BaseBdev2 00:14:28.392 BaseBdev3' 00:14:28.392 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:28.392 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:28.392 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:28.650 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:28.650 "name": "BaseBdev1", 00:14:28.650 "aliases": [ 00:14:28.650 "389beb59-8705-4530-a9a6-b090c5444f36" 00:14:28.650 ], 00:14:28.650 "product_name": "Malloc disk", 00:14:28.650 "block_size": 512, 00:14:28.650 "num_blocks": 65536, 00:14:28.650 "uuid": "389beb59-8705-4530-a9a6-b090c5444f36", 00:14:28.650 "assigned_rate_limits": { 00:14:28.650 "rw_ios_per_sec": 0, 00:14:28.650 "rw_mbytes_per_sec": 0, 00:14:28.650 "r_mbytes_per_sec": 0, 00:14:28.650 "w_mbytes_per_sec": 0 00:14:28.650 }, 00:14:28.650 "claimed": true, 00:14:28.650 "claim_type": "exclusive_write", 00:14:28.650 "zoned": false, 00:14:28.650 "supported_io_types": { 00:14:28.650 "read": true, 00:14:28.650 "write": true, 00:14:28.650 "unmap": true, 00:14:28.650 "write_zeroes": true, 00:14:28.650 "flush": true, 00:14:28.650 "reset": true, 00:14:28.650 "compare": false, 00:14:28.650 "compare_and_write": false, 00:14:28.650 "abort": true, 00:14:28.650 "nvme_admin": false, 00:14:28.650 "nvme_io": false 00:14:28.650 }, 00:14:28.650 "memory_domains": [ 00:14:28.650 { 00:14:28.650 "dma_device_id": "system", 00:14:28.650 "dma_device_type": 1 00:14:28.650 }, 
00:14:28.650 { 00:14:28.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.650 "dma_device_type": 2 00:14:28.650 } 00:14:28.650 ], 00:14:28.650 "driver_specific": {} 00:14:28.650 }' 00:14:28.650 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:28.650 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:28.650 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:28.650 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:28.650 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:28.908 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:28.908 23:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:28.908 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:28.908 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:28.908 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:28.908 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:28.908 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:28.908 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:28.908 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:28.908 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:29.168 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:29.168 "name": "BaseBdev2", 00:14:29.168 "aliases": [ 00:14:29.168 "fbcdd18f-d152-41f8-8923-176beed3d5b8" 00:14:29.168 ], 00:14:29.168 "product_name": "Malloc disk", 00:14:29.168 "block_size": 512, 00:14:29.168 "num_blocks": 65536, 00:14:29.168 "uuid": "fbcdd18f-d152-41f8-8923-176beed3d5b8", 00:14:29.168 "assigned_rate_limits": { 00:14:29.168 "rw_ios_per_sec": 0, 00:14:29.168 "rw_mbytes_per_sec": 0, 00:14:29.168 "r_mbytes_per_sec": 0, 00:14:29.168 "w_mbytes_per_sec": 0 00:14:29.168 }, 00:14:29.168 "claimed": true, 00:14:29.168 "claim_type": "exclusive_write", 00:14:29.168 "zoned": false, 00:14:29.168 "supported_io_types": { 00:14:29.168 "read": true, 00:14:29.168 "write": true, 00:14:29.168 "unmap": true, 00:14:29.168 "write_zeroes": true, 00:14:29.168 "flush": true, 00:14:29.168 "reset": true, 00:14:29.168 "compare": false, 00:14:29.168 "compare_and_write": false, 00:14:29.168 "abort": true, 00:14:29.168 "nvme_admin": false, 00:14:29.168 "nvme_io": false 00:14:29.168 }, 00:14:29.168 "memory_domains": [ 00:14:29.168 { 00:14:29.168 "dma_device_id": "system", 00:14:29.168 "dma_device_type": 1 00:14:29.168 }, 00:14:29.168 { 00:14:29.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.168 "dma_device_type": 2 00:14:29.168 } 00:14:29.168 ], 00:14:29.168 "driver_specific": {} 00:14:29.168 }' 00:14:29.168 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:29.427 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:29.427 23:29:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:29.427 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:29.427 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:29.427 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:29.427 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:29.427 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:29.685 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:29.685 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:29.685 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:29.685 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:29.685 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:29.685 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:29.685 23:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:29.944 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:29.944 "name": "BaseBdev3", 00:14:29.944 "aliases": [ 00:14:29.944 "281c0529-8484-4df0-b4ef-e702e7f401c1" 00:14:29.944 ], 00:14:29.944 "product_name": "Malloc disk", 00:14:29.944 "block_size": 512, 00:14:29.944 "num_blocks": 65536, 00:14:29.944 "uuid": "281c0529-8484-4df0-b4ef-e702e7f401c1", 00:14:29.944 "assigned_rate_limits": { 00:14:29.944 "rw_ios_per_sec": 0, 00:14:29.944 "rw_mbytes_per_sec": 0, 00:14:29.944 "r_mbytes_per_sec": 0, 00:14:29.944 "w_mbytes_per_sec": 0 00:14:29.944 }, 00:14:29.944 "claimed": true, 00:14:29.944 "claim_type": "exclusive_write", 00:14:29.944 "zoned": false, 00:14:29.944 "supported_io_types": { 00:14:29.944 "read": true, 00:14:29.944 "write": true, 00:14:29.944 "unmap": true, 00:14:29.944 "write_zeroes": true, 00:14:29.944 "flush": true, 00:14:29.944 "reset": true, 00:14:29.944 "compare": false, 00:14:29.944 "compare_and_write": false, 00:14:29.944 "abort": true, 00:14:29.944 "nvme_admin": false, 00:14:29.944 "nvme_io": false 00:14:29.944 }, 00:14:29.944 "memory_domains": [ 00:14:29.944 { 00:14:29.944 "dma_device_id": "system", 00:14:29.944 "dma_device_type": 1 00:14:29.944 }, 00:14:29.944 { 00:14:29.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.944 "dma_device_type": 2 00:14:29.944 } 00:14:29.944 ], 00:14:29.944 "driver_specific": {} 00:14:29.944 }' 00:14:29.944 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:29.944 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:29.944 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:29.944 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:30.202 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:30.202 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.202 23:29:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:30.202 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:30.202 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.202 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:30.202 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:30.461 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:30.461 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:30.461 [2024-05-14 23:29:53.721367] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:30.461 [2024-05-14 23:29:53.721415] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.461 [2024-05-14 23:29:53.721459] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.720 23:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.978 23:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.978 "name": "Existed_Raid", 00:14:30.978 "uuid": "34fedab9-cc13-4b88-bde3-0b3bc7ccfc52", 00:14:30.978 "strip_size_kb": 64, 00:14:30.978 "state": "offline", 00:14:30.978 "raid_level": "raid0", 00:14:30.978 "superblock": true, 00:14:30.978 
"num_base_bdevs": 3, 00:14:30.978 "num_base_bdevs_discovered": 2, 00:14:30.978 "num_base_bdevs_operational": 2, 00:14:30.978 "base_bdevs_list": [ 00:14:30.978 { 00:14:30.978 "name": null, 00:14:30.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.978 "is_configured": false, 00:14:30.978 "data_offset": 2048, 00:14:30.978 "data_size": 63488 00:14:30.978 }, 00:14:30.978 { 00:14:30.978 "name": "BaseBdev2", 00:14:30.978 "uuid": "fbcdd18f-d152-41f8-8923-176beed3d5b8", 00:14:30.978 "is_configured": true, 00:14:30.978 "data_offset": 2048, 00:14:30.978 "data_size": 63488 00:14:30.978 }, 00:14:30.978 { 00:14:30.978 "name": "BaseBdev3", 00:14:30.978 "uuid": "281c0529-8484-4df0-b4ef-e702e7f401c1", 00:14:30.978 "is_configured": true, 00:14:30.978 "data_offset": 2048, 00:14:30.978 "data_size": 63488 00:14:30.978 } 00:14:30.978 ] 00:14:30.978 }' 00:14:30.978 23:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.978 23:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.543 23:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:31.543 23:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:31.543 23:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.543 23:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:31.800 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:31.800 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:31.800 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:32.128 [2024-05-14 23:29:55.195145] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:32.128 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:32.128 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:32.128 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.128 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:14:32.386 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:14:32.386 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:32.386 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:32.645 [2024-05-14 23:29:55.686367] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:32.645 [2024-05-14 23:29:55.686431] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:14:32.645 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:32.645 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:32.645 
23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.645 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:14:32.904 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:14:32.904 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:14:32.904 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:14:32.904 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:14:32.904 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:32.904 23:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:33.163 BaseBdev2 00:14:33.163 23:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:14:33.163 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:33.163 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:33.163 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:33.163 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:33.163 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:33.163 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:33.163 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:33.421 [ 00:14:33.421 { 00:14:33.421 "name": "BaseBdev2", 00:14:33.421 "aliases": [ 00:14:33.421 "ccf1302d-d6a6-4dc0-a043-e8cba0de008c" 00:14:33.421 ], 00:14:33.421 "product_name": "Malloc disk", 00:14:33.421 "block_size": 512, 00:14:33.421 "num_blocks": 65536, 00:14:33.421 "uuid": "ccf1302d-d6a6-4dc0-a043-e8cba0de008c", 00:14:33.421 "assigned_rate_limits": { 00:14:33.421 "rw_ios_per_sec": 0, 00:14:33.421 "rw_mbytes_per_sec": 0, 00:14:33.421 "r_mbytes_per_sec": 0, 00:14:33.421 "w_mbytes_per_sec": 0 00:14:33.421 }, 00:14:33.421 "claimed": false, 00:14:33.421 "zoned": false, 00:14:33.421 "supported_io_types": { 00:14:33.421 "read": true, 00:14:33.421 "write": true, 00:14:33.421 "unmap": true, 00:14:33.421 "write_zeroes": true, 00:14:33.421 "flush": true, 00:14:33.421 "reset": true, 00:14:33.421 "compare": false, 00:14:33.422 "compare_and_write": false, 00:14:33.422 "abort": true, 00:14:33.422 "nvme_admin": false, 00:14:33.422 "nvme_io": false 00:14:33.422 }, 00:14:33.422 "memory_domains": [ 00:14:33.422 { 00:14:33.422 "dma_device_id": "system", 00:14:33.422 "dma_device_type": 1 00:14:33.422 }, 00:14:33.422 { 00:14:33.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.422 "dma_device_type": 2 00:14:33.422 } 00:14:33.422 ], 00:14:33.422 "driver_specific": {} 00:14:33.422 } 00:14:33.422 ] 00:14:33.422 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 
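Each base bdev in the trace is a malloc disk created over the same RPC socket (512-byte blocks, 65536 blocks in the JSON above); the test then waits for examine to finish and polls bdev_get_bdevs with a 2000 ms timeout before using it. A condensed per-bdev sketch built only from the calls shown above (illustrative standalone usage, not the waitforbdev helper itself):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  # Create one 32 MB malloc base bdev with 512-byte blocks; rpc.py prints its name.
  rpc bdev_malloc_create 32 512 -b BaseBdev2

  # Let examine callbacks complete, then confirm the bdev is visible
  # (-t 2000 mirrors the 2000 ms bdev_timeout used by the test's waitforbdev).
  rpc bdev_wait_for_examine
  rpc bdev_get_bdevs -b BaseBdev2 -t 2000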
00:14:33.422 23:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:33.422 23:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:33.422 23:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:33.681 BaseBdev3 00:14:33.681 23:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:14:33.681 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:14:33.681 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:33.681 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:33.681 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:33.681 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:33.681 23:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:33.939 23:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:34.197 [ 00:14:34.197 { 00:14:34.197 "name": "BaseBdev3", 00:14:34.197 "aliases": [ 00:14:34.197 "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2" 00:14:34.197 ], 00:14:34.197 "product_name": "Malloc disk", 00:14:34.197 "block_size": 512, 00:14:34.197 "num_blocks": 65536, 00:14:34.197 "uuid": "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2", 00:14:34.197 "assigned_rate_limits": { 00:14:34.197 "rw_ios_per_sec": 0, 00:14:34.197 "rw_mbytes_per_sec": 0, 00:14:34.197 "r_mbytes_per_sec": 0, 00:14:34.197 "w_mbytes_per_sec": 0 00:14:34.197 }, 00:14:34.197 "claimed": false, 00:14:34.197 "zoned": false, 00:14:34.197 "supported_io_types": { 00:14:34.197 "read": true, 00:14:34.197 "write": true, 00:14:34.197 "unmap": true, 00:14:34.197 "write_zeroes": true, 00:14:34.197 "flush": true, 00:14:34.197 "reset": true, 00:14:34.197 "compare": false, 00:14:34.197 "compare_and_write": false, 00:14:34.197 "abort": true, 00:14:34.197 "nvme_admin": false, 00:14:34.197 "nvme_io": false 00:14:34.197 }, 00:14:34.197 "memory_domains": [ 00:14:34.197 { 00:14:34.197 "dma_device_id": "system", 00:14:34.197 "dma_device_type": 1 00:14:34.197 }, 00:14:34.197 { 00:14:34.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.197 "dma_device_type": 2 00:14:34.197 } 00:14:34.197 ], 00:14:34.197 "driver_specific": {} 00:14:34.197 } 00:14:34.197 ] 00:14:34.197 23:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:34.197 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:14:34.197 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:14:34.197 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:34.454 [2024-05-14 23:29:57.541994] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.454 
[2024-05-14 23:29:57.542089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.454 [2024-05-14 23:29:57.542114] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.454 [2024-05-14 23:29:57.543737] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.454 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.711 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:34.711 "name": "Existed_Raid", 00:14:34.711 "uuid": "cbe26981-6798-48fa-a350-d1a00d8aee66", 00:14:34.711 "strip_size_kb": 64, 00:14:34.711 "state": "configuring", 00:14:34.711 "raid_level": "raid0", 00:14:34.711 "superblock": true, 00:14:34.711 "num_base_bdevs": 3, 00:14:34.711 "num_base_bdevs_discovered": 2, 00:14:34.711 "num_base_bdevs_operational": 3, 00:14:34.711 "base_bdevs_list": [ 00:14:34.711 { 00:14:34.711 "name": "BaseBdev1", 00:14:34.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.711 "is_configured": false, 00:14:34.712 "data_offset": 0, 00:14:34.712 "data_size": 0 00:14:34.712 }, 00:14:34.712 { 00:14:34.712 "name": "BaseBdev2", 00:14:34.712 "uuid": "ccf1302d-d6a6-4dc0-a043-e8cba0de008c", 00:14:34.712 "is_configured": true, 00:14:34.712 "data_offset": 2048, 00:14:34.712 "data_size": 63488 00:14:34.712 }, 00:14:34.712 { 00:14:34.712 "name": "BaseBdev3", 00:14:34.712 "uuid": "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2", 00:14:34.712 "is_configured": true, 00:14:34.712 "data_offset": 2048, 00:14:34.712 "data_size": 63488 00:14:34.712 } 00:14:34.712 ] 00:14:34.712 }' 00:14:34.712 23:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:34.712 23:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.277 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:35.562 [2024-05-14 23:29:58.674116] 
bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.562 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.873 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:35.873 "name": "Existed_Raid", 00:14:35.873 "uuid": "cbe26981-6798-48fa-a350-d1a00d8aee66", 00:14:35.873 "strip_size_kb": 64, 00:14:35.873 "state": "configuring", 00:14:35.873 "raid_level": "raid0", 00:14:35.873 "superblock": true, 00:14:35.874 "num_base_bdevs": 3, 00:14:35.874 "num_base_bdevs_discovered": 1, 00:14:35.874 "num_base_bdevs_operational": 3, 00:14:35.874 "base_bdevs_list": [ 00:14:35.874 { 00:14:35.874 "name": "BaseBdev1", 00:14:35.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.874 "is_configured": false, 00:14:35.874 "data_offset": 0, 00:14:35.874 "data_size": 0 00:14:35.874 }, 00:14:35.874 { 00:14:35.874 "name": null, 00:14:35.874 "uuid": "ccf1302d-d6a6-4dc0-a043-e8cba0de008c", 00:14:35.874 "is_configured": false, 00:14:35.874 "data_offset": 2048, 00:14:35.874 "data_size": 63488 00:14:35.874 }, 00:14:35.874 { 00:14:35.874 "name": "BaseBdev3", 00:14:35.874 "uuid": "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2", 00:14:35.874 "is_configured": true, 00:14:35.874 "data_offset": 2048, 00:14:35.874 "data_size": 63488 00:14:35.874 } 00:14:35.874 ] 00:14:35.874 }' 00:14:35.874 23:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:35.874 23:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.439 23:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.440 23:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:36.697 23:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:14:36.697 23:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.955 [2024-05-14 23:30:00.081705] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.955 BaseBdev1 00:14:36.955 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:14:36.955 23:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:36.955 23:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:36.955 23:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:36.955 23:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:36.955 23:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:36.955 23:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.212 23:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:37.470 [ 00:14:37.470 { 00:14:37.470 "name": "BaseBdev1", 00:14:37.470 "aliases": [ 00:14:37.470 "133313c4-7d84-47b5-82d2-1353d15c75cf" 00:14:37.470 ], 00:14:37.470 "product_name": "Malloc disk", 00:14:37.470 "block_size": 512, 00:14:37.470 "num_blocks": 65536, 00:14:37.470 "uuid": "133313c4-7d84-47b5-82d2-1353d15c75cf", 00:14:37.470 "assigned_rate_limits": { 00:14:37.470 "rw_ios_per_sec": 0, 00:14:37.470 "rw_mbytes_per_sec": 0, 00:14:37.470 "r_mbytes_per_sec": 0, 00:14:37.470 "w_mbytes_per_sec": 0 00:14:37.470 }, 00:14:37.470 "claimed": true, 00:14:37.470 "claim_type": "exclusive_write", 00:14:37.470 "zoned": false, 00:14:37.470 "supported_io_types": { 00:14:37.470 "read": true, 00:14:37.470 "write": true, 00:14:37.470 "unmap": true, 00:14:37.470 "write_zeroes": true, 00:14:37.470 "flush": true, 00:14:37.470 "reset": true, 00:14:37.470 "compare": false, 00:14:37.470 "compare_and_write": false, 00:14:37.470 "abort": true, 00:14:37.470 "nvme_admin": false, 00:14:37.470 "nvme_io": false 00:14:37.470 }, 00:14:37.470 "memory_domains": [ 00:14:37.470 { 00:14:37.470 "dma_device_id": "system", 00:14:37.470 "dma_device_type": 1 00:14:37.470 }, 00:14:37.470 { 00:14:37.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.470 "dma_device_type": 2 00:14:37.470 } 00:14:37.470 ], 00:14:37.470 "driver_specific": {} 00:14:37.470 } 00:14:37.470 ] 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.470 "name": "Existed_Raid", 00:14:37.470 "uuid": "cbe26981-6798-48fa-a350-d1a00d8aee66", 00:14:37.470 "strip_size_kb": 64, 00:14:37.470 "state": "configuring", 00:14:37.470 "raid_level": "raid0", 00:14:37.470 "superblock": true, 00:14:37.470 "num_base_bdevs": 3, 00:14:37.470 "num_base_bdevs_discovered": 2, 00:14:37.470 "num_base_bdevs_operational": 3, 00:14:37.470 "base_bdevs_list": [ 00:14:37.470 { 00:14:37.470 "name": "BaseBdev1", 00:14:37.470 "uuid": "133313c4-7d84-47b5-82d2-1353d15c75cf", 00:14:37.470 "is_configured": true, 00:14:37.470 "data_offset": 2048, 00:14:37.470 "data_size": 63488 00:14:37.470 }, 00:14:37.470 { 00:14:37.470 "name": null, 00:14:37.470 "uuid": "ccf1302d-d6a6-4dc0-a043-e8cba0de008c", 00:14:37.470 "is_configured": false, 00:14:37.470 "data_offset": 2048, 00:14:37.470 "data_size": 63488 00:14:37.470 }, 00:14:37.470 { 00:14:37.470 "name": "BaseBdev3", 00:14:37.470 "uuid": "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2", 00:14:37.470 "is_configured": true, 00:14:37.470 "data_offset": 2048, 00:14:37.470 "data_size": 63488 00:14:37.470 } 00:14:37.470 ] 00:14:37.470 }' 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.470 23:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.401 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.401 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:38.659 [2024-05-14 23:30:01.914110] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:38.659 
23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.659 23:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.916 23:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.916 "name": "Existed_Raid", 00:14:38.916 "uuid": "cbe26981-6798-48fa-a350-d1a00d8aee66", 00:14:38.916 "strip_size_kb": 64, 00:14:38.916 "state": "configuring", 00:14:38.916 "raid_level": "raid0", 00:14:38.916 "superblock": true, 00:14:38.916 "num_base_bdevs": 3, 00:14:38.916 "num_base_bdevs_discovered": 1, 00:14:38.916 "num_base_bdevs_operational": 3, 00:14:38.916 "base_bdevs_list": [ 00:14:38.916 { 00:14:38.916 "name": "BaseBdev1", 00:14:38.916 "uuid": "133313c4-7d84-47b5-82d2-1353d15c75cf", 00:14:38.916 "is_configured": true, 00:14:38.916 "data_offset": 2048, 00:14:38.916 "data_size": 63488 00:14:38.916 }, 00:14:38.916 { 00:14:38.916 "name": null, 00:14:38.916 "uuid": "ccf1302d-d6a6-4dc0-a043-e8cba0de008c", 00:14:38.916 "is_configured": false, 00:14:38.916 "data_offset": 2048, 00:14:38.916 "data_size": 63488 00:14:38.916 }, 00:14:38.916 { 00:14:38.916 "name": null, 00:14:38.916 "uuid": "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2", 00:14:38.917 "is_configured": false, 00:14:38.917 "data_offset": 2048, 00:14:38.917 "data_size": 63488 00:14:38.917 } 00:14:38.917 ] 00:14:38.917 }' 00:14:38.917 23:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.917 23:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.481 23:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:39.481 23:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.738 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:14:39.738 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:39.996 [2024-05-14 23:30:03.230298] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid0 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.996 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.253 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:40.253 "name": "Existed_Raid", 00:14:40.253 "uuid": "cbe26981-6798-48fa-a350-d1a00d8aee66", 00:14:40.253 "strip_size_kb": 64, 00:14:40.253 "state": "configuring", 00:14:40.253 "raid_level": "raid0", 00:14:40.254 "superblock": true, 00:14:40.254 "num_base_bdevs": 3, 00:14:40.254 "num_base_bdevs_discovered": 2, 00:14:40.254 "num_base_bdevs_operational": 3, 00:14:40.254 "base_bdevs_list": [ 00:14:40.254 { 00:14:40.254 "name": "BaseBdev1", 00:14:40.254 "uuid": "133313c4-7d84-47b5-82d2-1353d15c75cf", 00:14:40.254 "is_configured": true, 00:14:40.254 "data_offset": 2048, 00:14:40.254 "data_size": 63488 00:14:40.254 }, 00:14:40.254 { 00:14:40.254 "name": null, 00:14:40.254 "uuid": "ccf1302d-d6a6-4dc0-a043-e8cba0de008c", 00:14:40.254 "is_configured": false, 00:14:40.254 "data_offset": 2048, 00:14:40.254 "data_size": 63488 00:14:40.254 }, 00:14:40.254 { 00:14:40.254 "name": "BaseBdev3", 00:14:40.254 "uuid": "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2", 00:14:40.254 "is_configured": true, 00:14:40.254 "data_offset": 2048, 00:14:40.254 "data_size": 63488 00:14:40.254 } 00:14:40.254 ] 00:14:40.254 }' 00:14:40.254 23:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:40.254 23:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.185 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.185 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:41.185 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:14:41.185 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:41.443 [2024-05-14 23:30:04.590556] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.443 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.700 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:41.700 "name": "Existed_Raid", 00:14:41.700 "uuid": "cbe26981-6798-48fa-a350-d1a00d8aee66", 00:14:41.700 "strip_size_kb": 64, 00:14:41.700 "state": "configuring", 00:14:41.700 "raid_level": "raid0", 00:14:41.700 "superblock": true, 00:14:41.700 "num_base_bdevs": 3, 00:14:41.700 "num_base_bdevs_discovered": 1, 00:14:41.700 "num_base_bdevs_operational": 3, 00:14:41.700 "base_bdevs_list": [ 00:14:41.700 { 00:14:41.700 "name": null, 00:14:41.700 "uuid": "133313c4-7d84-47b5-82d2-1353d15c75cf", 00:14:41.700 "is_configured": false, 00:14:41.700 "data_offset": 2048, 00:14:41.700 "data_size": 63488 00:14:41.700 }, 00:14:41.700 { 00:14:41.700 "name": null, 00:14:41.700 "uuid": "ccf1302d-d6a6-4dc0-a043-e8cba0de008c", 00:14:41.700 "is_configured": false, 00:14:41.700 "data_offset": 2048, 00:14:41.700 "data_size": 63488 00:14:41.700 }, 00:14:41.700 { 00:14:41.700 "name": "BaseBdev3", 00:14:41.700 "uuid": "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2", 00:14:41.700 "is_configured": true, 00:14:41.700 "data_offset": 2048, 00:14:41.700 "data_size": 63488 00:14:41.700 } 00:14:41.700 ] 00:14:41.700 }' 00:14:41.700 23:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:41.700 23:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.633 23:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.633 23:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:42.633 23:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:14:42.633 23:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:42.926 [2024-05-14 23:30:06.139893] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.926 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.184 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:43.184 "name": "Existed_Raid", 00:14:43.184 "uuid": "cbe26981-6798-48fa-a350-d1a00d8aee66", 00:14:43.184 "strip_size_kb": 64, 00:14:43.184 "state": "configuring", 00:14:43.184 "raid_level": "raid0", 00:14:43.184 "superblock": true, 00:14:43.184 "num_base_bdevs": 3, 00:14:43.184 "num_base_bdevs_discovered": 2, 00:14:43.184 "num_base_bdevs_operational": 3, 00:14:43.184 "base_bdevs_list": [ 00:14:43.184 { 00:14:43.184 "name": null, 00:14:43.184 "uuid": "133313c4-7d84-47b5-82d2-1353d15c75cf", 00:14:43.184 "is_configured": false, 00:14:43.184 "data_offset": 2048, 00:14:43.184 "data_size": 63488 00:14:43.184 }, 00:14:43.184 { 00:14:43.184 "name": "BaseBdev2", 00:14:43.184 "uuid": "ccf1302d-d6a6-4dc0-a043-e8cba0de008c", 00:14:43.184 "is_configured": true, 00:14:43.184 "data_offset": 2048, 00:14:43.184 "data_size": 63488 00:14:43.184 }, 00:14:43.184 { 00:14:43.184 "name": "BaseBdev3", 00:14:43.184 "uuid": "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2", 00:14:43.184 "is_configured": true, 00:14:43.184 "data_offset": 2048, 00:14:43.184 "data_size": 63488 00:14:43.184 } 00:14:43.184 ] 00:14:43.184 }' 00:14:43.184 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:43.184 23:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.749 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.749 23:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:44.006 23:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:14:44.006 23:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.006 23:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:44.265 23:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 133313c4-7d84-47b5-82d2-1353d15c75cf 00:14:44.522 NewBaseBdev 00:14:44.522 [2024-05-14 23:30:07.659248] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:44.522 [2024-05-14 23:30:07.659437] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:14:44.522 [2024-05-14 23:30:07.659454] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:44.522 [2024-05-14 23:30:07.659535] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:14:44.522 [2024-05-14 23:30:07.659760] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:14:44.522 [2024-05-14 23:30:07.659777] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:14:44.522 [2024-05-14 23:30:07.659875] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.522 23:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:14:44.522 23:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:14:44.522 23:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:44.522 23:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:44.522 23:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:44.522 23:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:44.522 23:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:44.780 23:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:45.038 [ 00:14:45.038 { 00:14:45.038 "name": "NewBaseBdev", 00:14:45.038 "aliases": [ 00:14:45.038 "133313c4-7d84-47b5-82d2-1353d15c75cf" 00:14:45.038 ], 00:14:45.038 "product_name": "Malloc disk", 00:14:45.038 "block_size": 512, 00:14:45.038 "num_blocks": 65536, 00:14:45.038 "uuid": "133313c4-7d84-47b5-82d2-1353d15c75cf", 00:14:45.038 "assigned_rate_limits": { 00:14:45.038 "rw_ios_per_sec": 0, 00:14:45.038 "rw_mbytes_per_sec": 0, 00:14:45.038 "r_mbytes_per_sec": 0, 00:14:45.038 "w_mbytes_per_sec": 0 00:14:45.038 }, 00:14:45.039 "claimed": true, 00:14:45.039 "claim_type": "exclusive_write", 00:14:45.039 "zoned": false, 00:14:45.039 "supported_io_types": { 00:14:45.039 "read": true, 00:14:45.039 "write": true, 00:14:45.039 "unmap": true, 00:14:45.039 "write_zeroes": true, 00:14:45.039 "flush": true, 00:14:45.039 "reset": true, 00:14:45.039 "compare": false, 00:14:45.039 "compare_and_write": false, 00:14:45.039 "abort": true, 00:14:45.039 "nvme_admin": false, 00:14:45.039 "nvme_io": false 00:14:45.039 }, 00:14:45.039 "memory_domains": [ 00:14:45.039 { 00:14:45.039 "dma_device_id": "system", 00:14:45.039 "dma_device_type": 1 00:14:45.039 }, 00:14:45.039 { 00:14:45.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.039 "dma_device_type": 2 00:14:45.039 } 00:14:45.039 ], 00:14:45.039 "driver_specific": {} 00:14:45.039 } 00:14:45.039 ] 00:14:45.039 23:30:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.039 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.297 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:45.297 "name": "Existed_Raid", 00:14:45.297 "uuid": "cbe26981-6798-48fa-a350-d1a00d8aee66", 00:14:45.297 "strip_size_kb": 64, 00:14:45.297 "state": "online", 00:14:45.297 "raid_level": "raid0", 00:14:45.297 "superblock": true, 00:14:45.297 "num_base_bdevs": 3, 00:14:45.297 "num_base_bdevs_discovered": 3, 00:14:45.297 "num_base_bdevs_operational": 3, 00:14:45.297 "base_bdevs_list": [ 00:14:45.297 { 00:14:45.297 "name": "NewBaseBdev", 00:14:45.297 "uuid": "133313c4-7d84-47b5-82d2-1353d15c75cf", 00:14:45.297 "is_configured": true, 00:14:45.297 "data_offset": 2048, 00:14:45.297 "data_size": 63488 00:14:45.297 }, 00:14:45.297 { 00:14:45.297 "name": "BaseBdev2", 00:14:45.297 "uuid": "ccf1302d-d6a6-4dc0-a043-e8cba0de008c", 00:14:45.297 "is_configured": true, 00:14:45.297 "data_offset": 2048, 00:14:45.297 "data_size": 63488 00:14:45.297 }, 00:14:45.297 { 00:14:45.297 "name": "BaseBdev3", 00:14:45.297 "uuid": "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2", 00:14:45.297 "is_configured": true, 00:14:45.297 "data_offset": 2048, 00:14:45.297 "data_size": 63488 00:14:45.297 } 00:14:45.297 ] 00:14:45.297 }' 00:14:45.297 23:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:45.297 23:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.864 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:14:45.864 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:14:45.864 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:45.864 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:45.864 23:30:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:45.864 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:14:45.864 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:45.864 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:46.123 [2024-05-14 23:30:09.267945] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:46.123 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:46.123 "name": "Existed_Raid", 00:14:46.123 "aliases": [ 00:14:46.123 "cbe26981-6798-48fa-a350-d1a00d8aee66" 00:14:46.123 ], 00:14:46.123 "product_name": "Raid Volume", 00:14:46.123 "block_size": 512, 00:14:46.123 "num_blocks": 190464, 00:14:46.123 "uuid": "cbe26981-6798-48fa-a350-d1a00d8aee66", 00:14:46.123 "assigned_rate_limits": { 00:14:46.123 "rw_ios_per_sec": 0, 00:14:46.123 "rw_mbytes_per_sec": 0, 00:14:46.123 "r_mbytes_per_sec": 0, 00:14:46.123 "w_mbytes_per_sec": 0 00:14:46.123 }, 00:14:46.123 "claimed": false, 00:14:46.123 "zoned": false, 00:14:46.123 "supported_io_types": { 00:14:46.123 "read": true, 00:14:46.123 "write": true, 00:14:46.123 "unmap": true, 00:14:46.123 "write_zeroes": true, 00:14:46.123 "flush": true, 00:14:46.123 "reset": true, 00:14:46.123 "compare": false, 00:14:46.123 "compare_and_write": false, 00:14:46.123 "abort": false, 00:14:46.123 "nvme_admin": false, 00:14:46.123 "nvme_io": false 00:14:46.123 }, 00:14:46.123 "memory_domains": [ 00:14:46.123 { 00:14:46.123 "dma_device_id": "system", 00:14:46.123 "dma_device_type": 1 00:14:46.123 }, 00:14:46.123 { 00:14:46.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.123 "dma_device_type": 2 00:14:46.123 }, 00:14:46.123 { 00:14:46.123 "dma_device_id": "system", 00:14:46.123 "dma_device_type": 1 00:14:46.123 }, 00:14:46.123 { 00:14:46.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.123 "dma_device_type": 2 00:14:46.123 }, 00:14:46.123 { 00:14:46.123 "dma_device_id": "system", 00:14:46.123 "dma_device_type": 1 00:14:46.123 }, 00:14:46.123 { 00:14:46.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.123 "dma_device_type": 2 00:14:46.123 } 00:14:46.123 ], 00:14:46.123 "driver_specific": { 00:14:46.123 "raid": { 00:14:46.123 "uuid": "cbe26981-6798-48fa-a350-d1a00d8aee66", 00:14:46.123 "strip_size_kb": 64, 00:14:46.123 "state": "online", 00:14:46.123 "raid_level": "raid0", 00:14:46.123 "superblock": true, 00:14:46.123 "num_base_bdevs": 3, 00:14:46.123 "num_base_bdevs_discovered": 3, 00:14:46.123 "num_base_bdevs_operational": 3, 00:14:46.123 "base_bdevs_list": [ 00:14:46.123 { 00:14:46.123 "name": "NewBaseBdev", 00:14:46.123 "uuid": "133313c4-7d84-47b5-82d2-1353d15c75cf", 00:14:46.123 "is_configured": true, 00:14:46.123 "data_offset": 2048, 00:14:46.123 "data_size": 63488 00:14:46.123 }, 00:14:46.123 { 00:14:46.123 "name": "BaseBdev2", 00:14:46.123 "uuid": "ccf1302d-d6a6-4dc0-a043-e8cba0de008c", 00:14:46.123 "is_configured": true, 00:14:46.123 "data_offset": 2048, 00:14:46.123 "data_size": 63488 00:14:46.123 }, 00:14:46.123 { 00:14:46.123 "name": "BaseBdev3", 00:14:46.123 "uuid": "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2", 00:14:46.123 "is_configured": true, 00:14:46.123 "data_offset": 2048, 00:14:46.123 "data_size": 63488 00:14:46.123 } 00:14:46.123 ] 00:14:46.123 } 00:14:46.123 } 00:14:46.123 }' 00:14:46.123 23:30:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:46.123 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:14:46.123 BaseBdev2 00:14:46.123 BaseBdev3' 00:14:46.123 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:46.123 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:46.123 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:46.381 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:46.381 "name": "NewBaseBdev", 00:14:46.381 "aliases": [ 00:14:46.381 "133313c4-7d84-47b5-82d2-1353d15c75cf" 00:14:46.381 ], 00:14:46.381 "product_name": "Malloc disk", 00:14:46.381 "block_size": 512, 00:14:46.381 "num_blocks": 65536, 00:14:46.381 "uuid": "133313c4-7d84-47b5-82d2-1353d15c75cf", 00:14:46.381 "assigned_rate_limits": { 00:14:46.381 "rw_ios_per_sec": 0, 00:14:46.381 "rw_mbytes_per_sec": 0, 00:14:46.381 "r_mbytes_per_sec": 0, 00:14:46.381 "w_mbytes_per_sec": 0 00:14:46.381 }, 00:14:46.381 "claimed": true, 00:14:46.381 "claim_type": "exclusive_write", 00:14:46.381 "zoned": false, 00:14:46.381 "supported_io_types": { 00:14:46.381 "read": true, 00:14:46.381 "write": true, 00:14:46.381 "unmap": true, 00:14:46.381 "write_zeroes": true, 00:14:46.381 "flush": true, 00:14:46.381 "reset": true, 00:14:46.381 "compare": false, 00:14:46.381 "compare_and_write": false, 00:14:46.381 "abort": true, 00:14:46.381 "nvme_admin": false, 00:14:46.381 "nvme_io": false 00:14:46.381 }, 00:14:46.381 "memory_domains": [ 00:14:46.381 { 00:14:46.381 "dma_device_id": "system", 00:14:46.381 "dma_device_type": 1 00:14:46.381 }, 00:14:46.381 { 00:14:46.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.381 "dma_device_type": 2 00:14:46.381 } 00:14:46.381 ], 00:14:46.381 "driver_specific": {} 00:14:46.381 }' 00:14:46.381 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:46.381 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:46.696 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:46.696 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:46.696 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:46.696 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:46.696 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:46.696 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:46.696 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:46.696 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:46.696 23:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:46.955 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:46.955 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 
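The property verification interleaved above follows one pattern: read the raid volume, list its configured base bdevs, then compare a few fields of each base bdev's JSON. A condensed sketch of that loop, built only from the RPCs and jq filters visible in the trace (the 512 block size and null metadata/DIF fields are what this particular test expects):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Names of the base bdevs currently configured into Existed_Raid.
raid_bdev_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
base_bdev_names=$(echo "$raid_bdev_info" |
    jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

# Check each configured base bdev the same way the trace does:
# 512-byte blocks and no metadata or DIF configuration.
for name in $base_bdev_names; do
    base_bdev_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(echo "$base_bdev_info" | jq .block_size)    == 512  ]]
    [[ $(echo "$base_bdev_info" | jq .md_size)       == null ]]
    [[ $(echo "$base_bdev_info" | jq .md_interleave) == null ]]
    [[ $(echo "$base_bdev_info" | jq .dif_type)      == null ]]
done
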
00:14:46.955 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:46.955 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:47.213 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:47.213 "name": "BaseBdev2", 00:14:47.213 "aliases": [ 00:14:47.213 "ccf1302d-d6a6-4dc0-a043-e8cba0de008c" 00:14:47.213 ], 00:14:47.213 "product_name": "Malloc disk", 00:14:47.213 "block_size": 512, 00:14:47.213 "num_blocks": 65536, 00:14:47.213 "uuid": "ccf1302d-d6a6-4dc0-a043-e8cba0de008c", 00:14:47.213 "assigned_rate_limits": { 00:14:47.213 "rw_ios_per_sec": 0, 00:14:47.213 "rw_mbytes_per_sec": 0, 00:14:47.213 "r_mbytes_per_sec": 0, 00:14:47.213 "w_mbytes_per_sec": 0 00:14:47.213 }, 00:14:47.213 "claimed": true, 00:14:47.213 "claim_type": "exclusive_write", 00:14:47.213 "zoned": false, 00:14:47.213 "supported_io_types": { 00:14:47.213 "read": true, 00:14:47.213 "write": true, 00:14:47.213 "unmap": true, 00:14:47.213 "write_zeroes": true, 00:14:47.213 "flush": true, 00:14:47.213 "reset": true, 00:14:47.213 "compare": false, 00:14:47.213 "compare_and_write": false, 00:14:47.213 "abort": true, 00:14:47.213 "nvme_admin": false, 00:14:47.213 "nvme_io": false 00:14:47.213 }, 00:14:47.213 "memory_domains": [ 00:14:47.213 { 00:14:47.213 "dma_device_id": "system", 00:14:47.213 "dma_device_type": 1 00:14:47.213 }, 00:14:47.213 { 00:14:47.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.213 "dma_device_type": 2 00:14:47.213 } 00:14:47.213 ], 00:14:47.213 "driver_specific": {} 00:14:47.213 }' 00:14:47.213 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:47.213 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:47.213 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:47.214 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:47.214 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:47.214 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:47.214 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:47.471 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:47.471 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:47.471 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:47.471 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:47.471 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:47.471 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:47.471 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:47.471 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:47.729 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:47.729 "name": "BaseBdev3", 00:14:47.729 "aliases": 
[ 00:14:47.729 "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2" 00:14:47.729 ], 00:14:47.729 "product_name": "Malloc disk", 00:14:47.729 "block_size": 512, 00:14:47.729 "num_blocks": 65536, 00:14:47.729 "uuid": "7cd5730a-61a2-4b96-a4bf-fc88dde1b7a2", 00:14:47.729 "assigned_rate_limits": { 00:14:47.729 "rw_ios_per_sec": 0, 00:14:47.729 "rw_mbytes_per_sec": 0, 00:14:47.729 "r_mbytes_per_sec": 0, 00:14:47.729 "w_mbytes_per_sec": 0 00:14:47.729 }, 00:14:47.729 "claimed": true, 00:14:47.729 "claim_type": "exclusive_write", 00:14:47.729 "zoned": false, 00:14:47.729 "supported_io_types": { 00:14:47.729 "read": true, 00:14:47.729 "write": true, 00:14:47.729 "unmap": true, 00:14:47.729 "write_zeroes": true, 00:14:47.729 "flush": true, 00:14:47.729 "reset": true, 00:14:47.729 "compare": false, 00:14:47.729 "compare_and_write": false, 00:14:47.729 "abort": true, 00:14:47.729 "nvme_admin": false, 00:14:47.729 "nvme_io": false 00:14:47.729 }, 00:14:47.729 "memory_domains": [ 00:14:47.729 { 00:14:47.729 "dma_device_id": "system", 00:14:47.729 "dma_device_type": 1 00:14:47.729 }, 00:14:47.729 { 00:14:47.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.729 "dma_device_type": 2 00:14:47.729 } 00:14:47.729 ], 00:14:47.729 "driver_specific": {} 00:14:47.729 }' 00:14:47.729 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:47.729 23:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:48.004 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:48.004 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:48.004 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:48.004 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:48.004 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:48.004 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:48.004 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:48.004 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:48.268 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:48.268 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:48.268 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:48.268 [2024-05-14 23:30:11.532027] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.268 [2024-05-14 23:30:11.532071] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.268 [2024-05-14 23:30:11.532142] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.268 [2024-05-14 23:30:11.532476] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.268 [2024-05-14 23:30:11.532500] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:14:48.268 23:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 57418 00:14:48.268 23:30:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 57418 ']' 00:14:48.268 23:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 57418 00:14:48.268 23:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:14:48.268 23:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:48.268 23:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 57418 00:14:48.527 killing process with pid 57418 00:14:48.527 23:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:48.527 23:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:48.527 23:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 57418' 00:14:48.527 23:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 57418 00:14:48.527 23:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 57418 00:14:48.527 [2024-05-14 23:30:11.564909] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.527 [2024-05-14 23:30:11.812918] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.949 23:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:14:49.949 00:14:49.949 real 0m30.396s 00:14:49.949 user 0m57.326s 00:14:49.949 sys 0m3.053s 00:14:49.949 23:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:49.949 23:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.949 ************************************ 00:14:49.949 END TEST raid_state_function_test_sb 00:14:49.949 ************************************ 00:14:49.949 23:30:13 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:14:49.949 23:30:13 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:49.949 23:30:13 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:49.949 23:30:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.949 ************************************ 00:14:49.949 START TEST raid_superblock_test 00:14:49.949 ************************************ 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 3 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:49.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=58414 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 58414 /var/tmp/spdk-raid.sock 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 58414 ']' 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:49.949 23:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.949 [2024-05-14 23:30:13.208284] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
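Before building anything, the superblock test starts its own standalone bdev_svc app on the raid socket and waits for its RPC server to come up; the waitforlisten 58414 /var/tmp/spdk-raid.sock call above does the waiting. A rough equivalent of that bring-up, with waitforlisten approximated here by a simple poll (rpc_get_methods is used only as a cheap probe RPC):

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-raid.sock

# Launch the standalone bdev service with bdev_raid debug logging,
# keeping its pid so the test can stop it later.
$spdk/test/app/bdev_svc/bdev_svc -r $sock -L bdev_raid &
raid_pid=$!

# Poll until the app answers RPCs on the UNIX socket (a simplified
# stand-in for the waitforlisten helper from autotest_common.sh).
until $spdk/scripts/rpc.py -s $sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
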
00:14:49.949 [2024-05-14 23:30:13.208467] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58414 ] 00:14:50.211 [2024-05-14 23:30:13.358368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.477 [2024-05-14 23:30:13.569451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.743 [2024-05-14 23:30:13.767123] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.743 23:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:50.743 23:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:14:50.743 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:50.743 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:50.743 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:50.743 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:50.743 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:50.743 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:50.743 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:50.743 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:50.743 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:51.011 malloc1 00:14:51.011 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:51.280 [2024-05-14 23:30:14.486243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:51.280 [2024-05-14 23:30:14.486347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.280 [2024-05-14 23:30:14.486430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:14:51.280 [2024-05-14 23:30:14.486480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.280 [2024-05-14 23:30:14.488218] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.280 [2024-05-14 23:30:14.488265] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:51.280 pt1 00:14:51.280 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:51.280 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:51.280 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:51.280 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:51.280 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:51.280 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:14:51.280 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:51.280 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:51.280 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:51.551 malloc2 00:14:51.551 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:51.824 [2024-05-14 23:30:14.908938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:51.824 [2024-05-14 23:30:14.909032] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.824 [2024-05-14 23:30:14.909080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:14:51.824 [2024-05-14 23:30:14.909121] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.824 [2024-05-14 23:30:14.911076] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.824 [2024-05-14 23:30:14.911133] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:51.824 pt2 00:14:51.824 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:51.824 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:51.824 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:51.824 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:51.824 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:51.824 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:51.824 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:51.824 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:51.824 23:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:52.084 malloc3 00:14:52.084 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:52.084 [2024-05-14 23:30:15.335022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:52.084 [2024-05-14 23:30:15.335115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.084 [2024-05-14 23:30:15.335441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:14:52.084 [2024-05-14 23:30:15.335514] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.084 pt3 00:14:52.084 [2024-05-14 23:30:15.337163] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.084 [2024-05-14 23:30:15.337214] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:52.084 23:30:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:52.084 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:52.085 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:52.342 [2024-05-14 23:30:15.531186] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:52.342 [2024-05-14 23:30:15.532770] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:52.342 [2024-05-14 23:30:15.532824] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:52.342 [2024-05-14 23:30:15.532975] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:14:52.342 [2024-05-14 23:30:15.532993] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:52.342 [2024-05-14 23:30:15.533117] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:14:52.342 [2024-05-14 23:30:15.533418] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:14:52.342 [2024-05-14 23:30:15.533434] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:14:52.342 [2024-05-14 23:30:15.533546] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.342 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.599 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:52.599 "name": "raid_bdev1", 00:14:52.599 "uuid": "95b078f5-9468-4712-8ce3-b6228a941b46", 00:14:52.599 "strip_size_kb": 64, 00:14:52.599 "state": "online", 00:14:52.599 "raid_level": "raid0", 00:14:52.599 "superblock": true, 00:14:52.599 "num_base_bdevs": 3, 00:14:52.599 "num_base_bdevs_discovered": 3, 00:14:52.599 "num_base_bdevs_operational": 3, 00:14:52.599 "base_bdevs_list": [ 00:14:52.599 { 00:14:52.599 "name": "pt1", 00:14:52.599 "uuid": "e39c559f-7ab2-5cf6-b7e4-d44d1302c855", 00:14:52.599 
"is_configured": true, 00:14:52.599 "data_offset": 2048, 00:14:52.599 "data_size": 63488 00:14:52.599 }, 00:14:52.599 { 00:14:52.599 "name": "pt2", 00:14:52.599 "uuid": "6d3bcf20-6aa0-5e20-942c-5ff02bec6a03", 00:14:52.599 "is_configured": true, 00:14:52.599 "data_offset": 2048, 00:14:52.599 "data_size": 63488 00:14:52.599 }, 00:14:52.599 { 00:14:52.599 "name": "pt3", 00:14:52.599 "uuid": "95f1b1b0-28b5-52af-bb6e-197eeb5c530e", 00:14:52.599 "is_configured": true, 00:14:52.600 "data_offset": 2048, 00:14:52.600 "data_size": 63488 00:14:52.600 } 00:14:52.600 ] 00:14:52.600 }' 00:14:52.600 23:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:52.600 23:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:53.165 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:14:53.165 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:14:53.165 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:14:53.165 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:14:53.165 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:14:53.165 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:14:53.165 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:53.422 [2024-05-14 23:30:16.567373] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.422 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:14:53.422 "name": "raid_bdev1", 00:14:53.422 "aliases": [ 00:14:53.422 "95b078f5-9468-4712-8ce3-b6228a941b46" 00:14:53.422 ], 00:14:53.422 "product_name": "Raid Volume", 00:14:53.422 "block_size": 512, 00:14:53.422 "num_blocks": 190464, 00:14:53.422 "uuid": "95b078f5-9468-4712-8ce3-b6228a941b46", 00:14:53.422 "assigned_rate_limits": { 00:14:53.422 "rw_ios_per_sec": 0, 00:14:53.422 "rw_mbytes_per_sec": 0, 00:14:53.422 "r_mbytes_per_sec": 0, 00:14:53.422 "w_mbytes_per_sec": 0 00:14:53.422 }, 00:14:53.422 "claimed": false, 00:14:53.422 "zoned": false, 00:14:53.422 "supported_io_types": { 00:14:53.422 "read": true, 00:14:53.422 "write": true, 00:14:53.422 "unmap": true, 00:14:53.422 "write_zeroes": true, 00:14:53.422 "flush": true, 00:14:53.422 "reset": true, 00:14:53.422 "compare": false, 00:14:53.422 "compare_and_write": false, 00:14:53.422 "abort": false, 00:14:53.422 "nvme_admin": false, 00:14:53.422 "nvme_io": false 00:14:53.422 }, 00:14:53.422 "memory_domains": [ 00:14:53.422 { 00:14:53.422 "dma_device_id": "system", 00:14:53.422 "dma_device_type": 1 00:14:53.422 }, 00:14:53.422 { 00:14:53.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.422 "dma_device_type": 2 00:14:53.422 }, 00:14:53.422 { 00:14:53.422 "dma_device_id": "system", 00:14:53.422 "dma_device_type": 1 00:14:53.422 }, 00:14:53.422 { 00:14:53.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.422 "dma_device_type": 2 00:14:53.422 }, 00:14:53.422 { 00:14:53.422 "dma_device_id": "system", 00:14:53.422 "dma_device_type": 1 00:14:53.422 }, 00:14:53.422 { 00:14:53.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.422 "dma_device_type": 
2 00:14:53.422 } 00:14:53.422 ], 00:14:53.422 "driver_specific": { 00:14:53.422 "raid": { 00:14:53.422 "uuid": "95b078f5-9468-4712-8ce3-b6228a941b46", 00:14:53.422 "strip_size_kb": 64, 00:14:53.422 "state": "online", 00:14:53.422 "raid_level": "raid0", 00:14:53.422 "superblock": true, 00:14:53.422 "num_base_bdevs": 3, 00:14:53.422 "num_base_bdevs_discovered": 3, 00:14:53.422 "num_base_bdevs_operational": 3, 00:14:53.422 "base_bdevs_list": [ 00:14:53.422 { 00:14:53.422 "name": "pt1", 00:14:53.422 "uuid": "e39c559f-7ab2-5cf6-b7e4-d44d1302c855", 00:14:53.422 "is_configured": true, 00:14:53.422 "data_offset": 2048, 00:14:53.422 "data_size": 63488 00:14:53.422 }, 00:14:53.422 { 00:14:53.422 "name": "pt2", 00:14:53.422 "uuid": "6d3bcf20-6aa0-5e20-942c-5ff02bec6a03", 00:14:53.422 "is_configured": true, 00:14:53.422 "data_offset": 2048, 00:14:53.422 "data_size": 63488 00:14:53.422 }, 00:14:53.422 { 00:14:53.422 "name": "pt3", 00:14:53.422 "uuid": "95f1b1b0-28b5-52af-bb6e-197eeb5c530e", 00:14:53.422 "is_configured": true, 00:14:53.422 "data_offset": 2048, 00:14:53.422 "data_size": 63488 00:14:53.422 } 00:14:53.422 ] 00:14:53.422 } 00:14:53.422 } 00:14:53.422 }' 00:14:53.422 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.422 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:14:53.422 pt2 00:14:53.422 pt3' 00:14:53.422 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:53.422 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:53.422 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:53.681 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:53.681 "name": "pt1", 00:14:53.681 "aliases": [ 00:14:53.681 "e39c559f-7ab2-5cf6-b7e4-d44d1302c855" 00:14:53.681 ], 00:14:53.681 "product_name": "passthru", 00:14:53.681 "block_size": 512, 00:14:53.681 "num_blocks": 65536, 00:14:53.681 "uuid": "e39c559f-7ab2-5cf6-b7e4-d44d1302c855", 00:14:53.681 "assigned_rate_limits": { 00:14:53.681 "rw_ios_per_sec": 0, 00:14:53.681 "rw_mbytes_per_sec": 0, 00:14:53.681 "r_mbytes_per_sec": 0, 00:14:53.681 "w_mbytes_per_sec": 0 00:14:53.681 }, 00:14:53.681 "claimed": true, 00:14:53.681 "claim_type": "exclusive_write", 00:14:53.681 "zoned": false, 00:14:53.681 "supported_io_types": { 00:14:53.681 "read": true, 00:14:53.681 "write": true, 00:14:53.681 "unmap": true, 00:14:53.681 "write_zeroes": true, 00:14:53.681 "flush": true, 00:14:53.681 "reset": true, 00:14:53.681 "compare": false, 00:14:53.681 "compare_and_write": false, 00:14:53.681 "abort": true, 00:14:53.681 "nvme_admin": false, 00:14:53.681 "nvme_io": false 00:14:53.681 }, 00:14:53.681 "memory_domains": [ 00:14:53.681 { 00:14:53.681 "dma_device_id": "system", 00:14:53.681 "dma_device_type": 1 00:14:53.681 }, 00:14:53.681 { 00:14:53.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.681 "dma_device_type": 2 00:14:53.681 } 00:14:53.681 ], 00:14:53.681 "driver_specific": { 00:14:53.681 "passthru": { 00:14:53.681 "name": "pt1", 00:14:53.681 "base_bdev_name": "malloc1" 00:14:53.681 } 00:14:53.681 } 00:14:53.681 }' 00:14:53.681 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:53.681 23:30:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:53.681 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:53.681 23:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:53.939 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:53.939 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:53.939 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:53.939 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:53.939 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:53.939 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:53.939 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:54.197 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:54.197 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:54.198 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:54.198 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:54.198 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:54.198 "name": "pt2", 00:14:54.198 "aliases": [ 00:14:54.198 "6d3bcf20-6aa0-5e20-942c-5ff02bec6a03" 00:14:54.198 ], 00:14:54.198 "product_name": "passthru", 00:14:54.198 "block_size": 512, 00:14:54.198 "num_blocks": 65536, 00:14:54.198 "uuid": "6d3bcf20-6aa0-5e20-942c-5ff02bec6a03", 00:14:54.198 "assigned_rate_limits": { 00:14:54.198 "rw_ios_per_sec": 0, 00:14:54.198 "rw_mbytes_per_sec": 0, 00:14:54.198 "r_mbytes_per_sec": 0, 00:14:54.198 "w_mbytes_per_sec": 0 00:14:54.198 }, 00:14:54.198 "claimed": true, 00:14:54.198 "claim_type": "exclusive_write", 00:14:54.198 "zoned": false, 00:14:54.198 "supported_io_types": { 00:14:54.198 "read": true, 00:14:54.198 "write": true, 00:14:54.198 "unmap": true, 00:14:54.198 "write_zeroes": true, 00:14:54.198 "flush": true, 00:14:54.198 "reset": true, 00:14:54.198 "compare": false, 00:14:54.198 "compare_and_write": false, 00:14:54.198 "abort": true, 00:14:54.198 "nvme_admin": false, 00:14:54.198 "nvme_io": false 00:14:54.198 }, 00:14:54.198 "memory_domains": [ 00:14:54.198 { 00:14:54.198 "dma_device_id": "system", 00:14:54.198 "dma_device_type": 1 00:14:54.198 }, 00:14:54.198 { 00:14:54.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.198 "dma_device_type": 2 00:14:54.198 } 00:14:54.198 ], 00:14:54.198 "driver_specific": { 00:14:54.198 "passthru": { 00:14:54.198 "name": "pt2", 00:14:54.198 "base_bdev_name": "malloc2" 00:14:54.198 } 00:14:54.198 } 00:14:54.198 }' 00:14:54.198 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:54.456 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:54.456 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:54.456 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:54.456 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:54.456 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:54.456 23:30:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:54.714 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:54.714 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:54.714 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:54.714 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:54.714 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:54.714 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:14:54.714 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:54.714 23:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:14:54.971 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:14:54.971 "name": "pt3", 00:14:54.971 "aliases": [ 00:14:54.971 "95f1b1b0-28b5-52af-bb6e-197eeb5c530e" 00:14:54.971 ], 00:14:54.971 "product_name": "passthru", 00:14:54.971 "block_size": 512, 00:14:54.972 "num_blocks": 65536, 00:14:54.972 "uuid": "95f1b1b0-28b5-52af-bb6e-197eeb5c530e", 00:14:54.972 "assigned_rate_limits": { 00:14:54.972 "rw_ios_per_sec": 0, 00:14:54.972 "rw_mbytes_per_sec": 0, 00:14:54.972 "r_mbytes_per_sec": 0, 00:14:54.972 "w_mbytes_per_sec": 0 00:14:54.972 }, 00:14:54.972 "claimed": true, 00:14:54.972 "claim_type": "exclusive_write", 00:14:54.972 "zoned": false, 00:14:54.972 "supported_io_types": { 00:14:54.972 "read": true, 00:14:54.972 "write": true, 00:14:54.972 "unmap": true, 00:14:54.972 "write_zeroes": true, 00:14:54.972 "flush": true, 00:14:54.972 "reset": true, 00:14:54.972 "compare": false, 00:14:54.972 "compare_and_write": false, 00:14:54.972 "abort": true, 00:14:54.972 "nvme_admin": false, 00:14:54.972 "nvme_io": false 00:14:54.972 }, 00:14:54.972 "memory_domains": [ 00:14:54.972 { 00:14:54.972 "dma_device_id": "system", 00:14:54.972 "dma_device_type": 1 00:14:54.972 }, 00:14:54.972 { 00:14:54.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.972 "dma_device_type": 2 00:14:54.972 } 00:14:54.972 ], 00:14:54.972 "driver_specific": { 00:14:54.972 "passthru": { 00:14:54.972 "name": "pt3", 00:14:54.972 "base_bdev_name": "malloc3" 00:14:54.972 } 00:14:54.972 } 00:14:54.972 }' 00:14:54.972 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:54.972 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:14:54.972 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:14:54.972 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:55.231 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:14:55.231 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:55.231 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:55.231 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:14:55.231 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:55.231 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:14:55.488 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 
-- # jq .dif_type 00:14:55.488 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:14:55.488 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:55.488 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:55.488 [2024-05-14 23:30:18.771646] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.745 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=95b078f5-9468-4712-8ce3-b6228a941b46 00:14:55.745 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 95b078f5-9468-4712-8ce3-b6228a941b46 ']' 00:14:55.745 23:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:55.745 [2024-05-14 23:30:19.031530] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.745 [2024-05-14 23:30:19.031571] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.745 [2024-05-14 23:30:19.031647] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.745 [2024-05-14 23:30:19.031691] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.745 [2024-05-14 23:30:19.031702] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:14:56.004 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.004 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:56.004 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:56.004 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:56.004 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:56.004 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:56.262 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:56.262 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:56.523 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:56.523 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:56.784 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:56.784 23:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:57.049 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:57.317 [2024-05-14 23:30:20.387728] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:57.317 [2024-05-14 23:30:20.389370] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:57.317 [2024-05-14 23:30:20.389421] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:57.317 [2024-05-14 23:30:20.389464] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:57.317 [2024-05-14 23:30:20.389533] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:57.317 [2024-05-14 23:30:20.389569] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:57.317 [2024-05-14 23:30:20.389629] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.317 [2024-05-14 23:30:20.389642] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:14:57.317 request: 00:14:57.317 { 00:14:57.317 "name": "raid_bdev1", 00:14:57.317 "raid_level": "raid0", 00:14:57.317 "base_bdevs": [ 00:14:57.317 "malloc1", 00:14:57.317 "malloc2", 00:14:57.317 "malloc3" 00:14:57.317 ], 00:14:57.317 "superblock": false, 00:14:57.317 "strip_size_kb": 64, 00:14:57.317 "method": "bdev_raid_create", 00:14:57.317 "req_id": 1 00:14:57.317 } 00:14:57.317 Got JSON-RPC error response 00:14:57.317 response: 00:14:57.317 { 00:14:57.317 "code": -17, 00:14:57.317 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:57.317 } 00:14:57.317 23:30:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # es=1 00:14:57.317 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:57.317 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:57.317 23:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:57.317 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:57.317 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.578 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:57.578 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:57.578 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:57.836 [2024-05-14 23:30:20.883880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:57.836 [2024-05-14 23:30:20.883965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.836 [2024-05-14 23:30:20.884014] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d680 00:14:57.836 [2024-05-14 23:30:20.884044] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.836 [2024-05-14 23:30:20.886876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.836 [2024-05-14 23:30:20.886931] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:57.837 [2024-05-14 23:30:20.887044] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:57.837 [2024-05-14 23:30:20.887107] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:57.837 pt1 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.837 23:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.095 23:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:14:58.095 "name": "raid_bdev1", 00:14:58.095 "uuid": "95b078f5-9468-4712-8ce3-b6228a941b46", 00:14:58.095 "strip_size_kb": 64, 00:14:58.095 "state": "configuring", 00:14:58.095 "raid_level": "raid0", 00:14:58.095 "superblock": true, 00:14:58.095 "num_base_bdevs": 3, 00:14:58.095 "num_base_bdevs_discovered": 1, 00:14:58.095 "num_base_bdevs_operational": 3, 00:14:58.095 "base_bdevs_list": [ 00:14:58.095 { 00:14:58.095 "name": "pt1", 00:14:58.095 "uuid": "e39c559f-7ab2-5cf6-b7e4-d44d1302c855", 00:14:58.095 "is_configured": true, 00:14:58.095 "data_offset": 2048, 00:14:58.095 "data_size": 63488 00:14:58.095 }, 00:14:58.095 { 00:14:58.095 "name": null, 00:14:58.095 "uuid": "6d3bcf20-6aa0-5e20-942c-5ff02bec6a03", 00:14:58.095 "is_configured": false, 00:14:58.095 "data_offset": 2048, 00:14:58.095 "data_size": 63488 00:14:58.095 }, 00:14:58.096 { 00:14:58.096 "name": null, 00:14:58.096 "uuid": "95f1b1b0-28b5-52af-bb6e-197eeb5c530e", 00:14:58.096 "is_configured": false, 00:14:58.096 "data_offset": 2048, 00:14:58.096 "data_size": 63488 00:14:58.096 } 00:14:58.096 ] 00:14:58.096 }' 00:14:58.096 23:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:58.096 23:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.662 23:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:58.662 23:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:58.921 [2024-05-14 23:30:22.012053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.921 [2024-05-14 23:30:22.012398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.921 [2024-05-14 23:30:22.012463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ee80 00:14:58.921 [2024-05-14 23:30:22.012488] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.921 [2024-05-14 23:30:22.012841] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.921 [2024-05-14 23:30:22.012877] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.921 [2024-05-14 23:30:22.012992] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:58.921 [2024-05-14 23:30:22.013025] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.921 pt2 00:14:58.921 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:58.921 [2024-05-14 23:30:22.204072] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:59.180 "name": "raid_bdev1", 00:14:59.180 "uuid": "95b078f5-9468-4712-8ce3-b6228a941b46", 00:14:59.180 "strip_size_kb": 64, 00:14:59.180 "state": "configuring", 00:14:59.180 "raid_level": "raid0", 00:14:59.180 "superblock": true, 00:14:59.180 "num_base_bdevs": 3, 00:14:59.180 "num_base_bdevs_discovered": 1, 00:14:59.180 "num_base_bdevs_operational": 3, 00:14:59.180 "base_bdevs_list": [ 00:14:59.180 { 00:14:59.180 "name": "pt1", 00:14:59.180 "uuid": "e39c559f-7ab2-5cf6-b7e4-d44d1302c855", 00:14:59.180 "is_configured": true, 00:14:59.180 "data_offset": 2048, 00:14:59.180 "data_size": 63488 00:14:59.180 }, 00:14:59.180 { 00:14:59.180 "name": null, 00:14:59.180 "uuid": "6d3bcf20-6aa0-5e20-942c-5ff02bec6a03", 00:14:59.180 "is_configured": false, 00:14:59.180 "data_offset": 2048, 00:14:59.180 "data_size": 63488 00:14:59.180 }, 00:14:59.180 { 00:14:59.180 "name": null, 00:14:59.180 "uuid": "95f1b1b0-28b5-52af-bb6e-197eeb5c530e", 00:14:59.180 "is_configured": false, 00:14:59.180 "data_offset": 2048, 00:14:59.180 "data_size": 63488 00:14:59.180 } 00:14:59.180 ] 00:14:59.180 }' 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:59.180 23:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.117 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:00.117 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:00.117 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:00.117 [2024-05-14 23:30:23.380196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:00.117 [2024-05-14 23:30:23.380289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.117 [2024-05-14 23:30:23.380339] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030680 00:15:00.117 [2024-05-14 23:30:23.380369] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.117 [2024-05-14 23:30:23.380942] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.117 [2024-05-14 23:30:23.380984] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:00.117 [2024-05-14 23:30:23.381294] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:00.117 [2024-05-14 23:30:23.381327] bdev_raid.c:3122:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:15:00.117 pt2 00:15:00.117 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:00.117 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:00.117 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:00.393 [2024-05-14 23:30:23.608226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:00.393 [2024-05-14 23:30:23.608319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.393 [2024-05-14 23:30:23.608365] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031b80 00:15:00.393 [2024-05-14 23:30:23.608394] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.393 [2024-05-14 23:30:23.608737] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.393 [2024-05-14 23:30:23.608777] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:00.393 [2024-05-14 23:30:23.608876] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:00.393 [2024-05-14 23:30:23.608901] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:00.394 [2024-05-14 23:30:23.608983] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:15:00.394 [2024-05-14 23:30:23.608996] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:00.394 [2024-05-14 23:30:23.609079] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:00.394 [2024-05-14 23:30:23.609301] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:15:00.394 [2024-05-14 23:30:23.609317] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:15:00.394 [2024-05-14 23:30:23.609413] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.394 pt3 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 
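The verify_raid_bdev_state helper being traced here does all of its checking over the JSON-RPC socket: it fetches the raid bdev list, selects the entry by name with jq, and compares the reported fields against the expected state, level, strip size and base-bdev counts. A minimal stand-alone sketch of the same check, assuming the bdev_svc app from this run is still listening on /var/tmp/spdk-raid.sock (the shell variables below are illustrative and not part of the traced script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Fetch every raid bdev and keep only the one being verified
    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # Compare the reported fields against the values the test expects
    [[ $(jq -r .state <<< "$info") == online ]]
    [[ $(jq -r .raid_level <<< "$info") == raid0 ]]
    [[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 3 ]]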
00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.394 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.653 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.653 "name": "raid_bdev1", 00:15:00.653 "uuid": "95b078f5-9468-4712-8ce3-b6228a941b46", 00:15:00.653 "strip_size_kb": 64, 00:15:00.653 "state": "online", 00:15:00.653 "raid_level": "raid0", 00:15:00.653 "superblock": true, 00:15:00.653 "num_base_bdevs": 3, 00:15:00.653 "num_base_bdevs_discovered": 3, 00:15:00.653 "num_base_bdevs_operational": 3, 00:15:00.653 "base_bdevs_list": [ 00:15:00.653 { 00:15:00.653 "name": "pt1", 00:15:00.653 "uuid": "e39c559f-7ab2-5cf6-b7e4-d44d1302c855", 00:15:00.653 "is_configured": true, 00:15:00.653 "data_offset": 2048, 00:15:00.653 "data_size": 63488 00:15:00.653 }, 00:15:00.653 { 00:15:00.653 "name": "pt2", 00:15:00.653 "uuid": "6d3bcf20-6aa0-5e20-942c-5ff02bec6a03", 00:15:00.653 "is_configured": true, 00:15:00.653 "data_offset": 2048, 00:15:00.653 "data_size": 63488 00:15:00.653 }, 00:15:00.653 { 00:15:00.653 "name": "pt3", 00:15:00.653 "uuid": "95f1b1b0-28b5-52af-bb6e-197eeb5c530e", 00:15:00.653 "is_configured": true, 00:15:00.653 "data_offset": 2048, 00:15:00.653 "data_size": 63488 00:15:00.653 } 00:15:00.653 ] 00:15:00.653 }' 00:15:00.653 23:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.653 23:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:01.586 [2024-05-14 23:30:24.756561] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:01.586 "name": "raid_bdev1", 00:15:01.586 "aliases": [ 00:15:01.586 "95b078f5-9468-4712-8ce3-b6228a941b46" 00:15:01.586 ], 00:15:01.586 "product_name": "Raid Volume", 00:15:01.586 "block_size": 512, 00:15:01.586 "num_blocks": 190464, 00:15:01.586 "uuid": "95b078f5-9468-4712-8ce3-b6228a941b46", 00:15:01.586 "assigned_rate_limits": { 00:15:01.586 "rw_ios_per_sec": 0, 00:15:01.586 "rw_mbytes_per_sec": 0, 00:15:01.586 "r_mbytes_per_sec": 0, 00:15:01.586 "w_mbytes_per_sec": 0 00:15:01.586 }, 00:15:01.586 "claimed": false, 00:15:01.586 "zoned": false, 00:15:01.586 "supported_io_types": { 00:15:01.586 "read": true, 00:15:01.586 "write": true, 00:15:01.586 "unmap": true, 00:15:01.586 "write_zeroes": true, 00:15:01.586 
"flush": true, 00:15:01.586 "reset": true, 00:15:01.586 "compare": false, 00:15:01.586 "compare_and_write": false, 00:15:01.586 "abort": false, 00:15:01.586 "nvme_admin": false, 00:15:01.586 "nvme_io": false 00:15:01.586 }, 00:15:01.586 "memory_domains": [ 00:15:01.586 { 00:15:01.586 "dma_device_id": "system", 00:15:01.586 "dma_device_type": 1 00:15:01.586 }, 00:15:01.586 { 00:15:01.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.586 "dma_device_type": 2 00:15:01.586 }, 00:15:01.586 { 00:15:01.586 "dma_device_id": "system", 00:15:01.586 "dma_device_type": 1 00:15:01.586 }, 00:15:01.586 { 00:15:01.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.586 "dma_device_type": 2 00:15:01.586 }, 00:15:01.586 { 00:15:01.586 "dma_device_id": "system", 00:15:01.586 "dma_device_type": 1 00:15:01.586 }, 00:15:01.586 { 00:15:01.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.586 "dma_device_type": 2 00:15:01.586 } 00:15:01.586 ], 00:15:01.586 "driver_specific": { 00:15:01.586 "raid": { 00:15:01.586 "uuid": "95b078f5-9468-4712-8ce3-b6228a941b46", 00:15:01.586 "strip_size_kb": 64, 00:15:01.586 "state": "online", 00:15:01.586 "raid_level": "raid0", 00:15:01.586 "superblock": true, 00:15:01.586 "num_base_bdevs": 3, 00:15:01.586 "num_base_bdevs_discovered": 3, 00:15:01.586 "num_base_bdevs_operational": 3, 00:15:01.586 "base_bdevs_list": [ 00:15:01.586 { 00:15:01.586 "name": "pt1", 00:15:01.586 "uuid": "e39c559f-7ab2-5cf6-b7e4-d44d1302c855", 00:15:01.586 "is_configured": true, 00:15:01.586 "data_offset": 2048, 00:15:01.586 "data_size": 63488 00:15:01.586 }, 00:15:01.586 { 00:15:01.586 "name": "pt2", 00:15:01.586 "uuid": "6d3bcf20-6aa0-5e20-942c-5ff02bec6a03", 00:15:01.586 "is_configured": true, 00:15:01.586 "data_offset": 2048, 00:15:01.586 "data_size": 63488 00:15:01.586 }, 00:15:01.586 { 00:15:01.586 "name": "pt3", 00:15:01.586 "uuid": "95f1b1b0-28b5-52af-bb6e-197eeb5c530e", 00:15:01.586 "is_configured": true, 00:15:01.586 "data_offset": 2048, 00:15:01.586 "data_size": 63488 00:15:01.586 } 00:15:01.586 ] 00:15:01.586 } 00:15:01.586 } 00:15:01.586 }' 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:15:01.586 pt2 00:15:01.586 pt3' 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:01.586 23:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:01.844 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:01.844 "name": "pt1", 00:15:01.844 "aliases": [ 00:15:01.844 "e39c559f-7ab2-5cf6-b7e4-d44d1302c855" 00:15:01.844 ], 00:15:01.844 "product_name": "passthru", 00:15:01.844 "block_size": 512, 00:15:01.844 "num_blocks": 65536, 00:15:01.844 "uuid": "e39c559f-7ab2-5cf6-b7e4-d44d1302c855", 00:15:01.844 "assigned_rate_limits": { 00:15:01.844 "rw_ios_per_sec": 0, 00:15:01.844 "rw_mbytes_per_sec": 0, 00:15:01.844 "r_mbytes_per_sec": 0, 00:15:01.844 "w_mbytes_per_sec": 0 00:15:01.844 }, 00:15:01.844 "claimed": true, 00:15:01.844 "claim_type": "exclusive_write", 00:15:01.844 "zoned": false, 00:15:01.844 "supported_io_types": { 00:15:01.844 "read": true, 00:15:01.844 "write": 
true, 00:15:01.844 "unmap": true, 00:15:01.844 "write_zeroes": true, 00:15:01.844 "flush": true, 00:15:01.844 "reset": true, 00:15:01.844 "compare": false, 00:15:01.844 "compare_and_write": false, 00:15:01.844 "abort": true, 00:15:01.844 "nvme_admin": false, 00:15:01.844 "nvme_io": false 00:15:01.844 }, 00:15:01.844 "memory_domains": [ 00:15:01.844 { 00:15:01.844 "dma_device_id": "system", 00:15:01.844 "dma_device_type": 1 00:15:01.844 }, 00:15:01.844 { 00:15:01.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.844 "dma_device_type": 2 00:15:01.844 } 00:15:01.844 ], 00:15:01.844 "driver_specific": { 00:15:01.844 "passthru": { 00:15:01.844 "name": "pt1", 00:15:01.844 "base_bdev_name": "malloc1" 00:15:01.844 } 00:15:01.844 } 00:15:01.844 }' 00:15:01.844 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:01.844 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:02.102 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:02.102 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:02.102 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:02.102 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.102 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:02.102 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:02.383 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:02.383 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:02.383 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:02.383 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:02.383 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:02.383 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:02.383 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:02.641 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:02.641 "name": "pt2", 00:15:02.641 "aliases": [ 00:15:02.641 "6d3bcf20-6aa0-5e20-942c-5ff02bec6a03" 00:15:02.641 ], 00:15:02.641 "product_name": "passthru", 00:15:02.641 "block_size": 512, 00:15:02.641 "num_blocks": 65536, 00:15:02.641 "uuid": "6d3bcf20-6aa0-5e20-942c-5ff02bec6a03", 00:15:02.641 "assigned_rate_limits": { 00:15:02.641 "rw_ios_per_sec": 0, 00:15:02.641 "rw_mbytes_per_sec": 0, 00:15:02.641 "r_mbytes_per_sec": 0, 00:15:02.641 "w_mbytes_per_sec": 0 00:15:02.641 }, 00:15:02.641 "claimed": true, 00:15:02.641 "claim_type": "exclusive_write", 00:15:02.641 "zoned": false, 00:15:02.641 "supported_io_types": { 00:15:02.641 "read": true, 00:15:02.641 "write": true, 00:15:02.641 "unmap": true, 00:15:02.641 "write_zeroes": true, 00:15:02.641 "flush": true, 00:15:02.641 "reset": true, 00:15:02.641 "compare": false, 00:15:02.641 "compare_and_write": false, 00:15:02.641 "abort": true, 00:15:02.641 "nvme_admin": false, 00:15:02.641 "nvme_io": false 00:15:02.641 }, 00:15:02.641 "memory_domains": [ 00:15:02.641 { 00:15:02.641 "dma_device_id": "system", 00:15:02.641 "dma_device_type": 1 00:15:02.641 }, 00:15:02.641 
{ 00:15:02.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.641 "dma_device_type": 2 00:15:02.641 } 00:15:02.641 ], 00:15:02.641 "driver_specific": { 00:15:02.641 "passthru": { 00:15:02.641 "name": "pt2", 00:15:02.641 "base_bdev_name": "malloc2" 00:15:02.642 } 00:15:02.642 } 00:15:02.642 }' 00:15:02.642 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:02.642 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:02.642 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:02.642 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:02.900 23:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:02.900 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.900 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:02.900 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:02.900 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:02.900 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:03.159 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:03.159 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:03.159 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:03.159 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:03.159 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:03.418 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:03.418 "name": "pt3", 00:15:03.418 "aliases": [ 00:15:03.418 "95f1b1b0-28b5-52af-bb6e-197eeb5c530e" 00:15:03.418 ], 00:15:03.418 "product_name": "passthru", 00:15:03.418 "block_size": 512, 00:15:03.418 "num_blocks": 65536, 00:15:03.418 "uuid": "95f1b1b0-28b5-52af-bb6e-197eeb5c530e", 00:15:03.418 "assigned_rate_limits": { 00:15:03.418 "rw_ios_per_sec": 0, 00:15:03.418 "rw_mbytes_per_sec": 0, 00:15:03.418 "r_mbytes_per_sec": 0, 00:15:03.418 "w_mbytes_per_sec": 0 00:15:03.418 }, 00:15:03.418 "claimed": true, 00:15:03.418 "claim_type": "exclusive_write", 00:15:03.418 "zoned": false, 00:15:03.418 "supported_io_types": { 00:15:03.418 "read": true, 00:15:03.418 "write": true, 00:15:03.418 "unmap": true, 00:15:03.418 "write_zeroes": true, 00:15:03.418 "flush": true, 00:15:03.418 "reset": true, 00:15:03.418 "compare": false, 00:15:03.418 "compare_and_write": false, 00:15:03.418 "abort": true, 00:15:03.418 "nvme_admin": false, 00:15:03.418 "nvme_io": false 00:15:03.418 }, 00:15:03.418 "memory_domains": [ 00:15:03.418 { 00:15:03.418 "dma_device_id": "system", 00:15:03.418 "dma_device_type": 1 00:15:03.418 }, 00:15:03.418 { 00:15:03.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.418 "dma_device_type": 2 00:15:03.418 } 00:15:03.418 ], 00:15:03.418 "driver_specific": { 00:15:03.418 "passthru": { 00:15:03.418 "name": "pt3", 00:15:03.418 "base_bdev_name": "malloc3" 00:15:03.418 } 00:15:03.418 } 00:15:03.418 }' 00:15:03.418 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:03.418 23:30:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:03.418 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:03.418 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:03.418 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:03.677 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.677 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:03.677 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:03.677 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.677 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:03.677 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:03.936 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:03.936 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:03.936 23:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:03.936 [2024-05-14 23:30:27.152814] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 95b078f5-9468-4712-8ce3-b6228a941b46 '!=' 95b078f5-9468-4712-8ce3-b6228a941b46 ']' 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 58414 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 58414 ']' 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 58414 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58414 00:15:03.936 killing process with pid 58414 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58414' 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 58414 00:15:03.936 23:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 58414 00:15:03.936 [2024-05-14 23:30:27.189176] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.936 [2024-05-14 23:30:27.189262] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.936 [2024-05-14 23:30:27.189303] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.936 
[2024-05-14 23:30:27.189314] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:15:04.195 [2024-05-14 23:30:27.430941] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:05.574 ************************************ 00:15:05.574 END TEST raid_superblock_test 00:15:05.574 ************************************ 00:15:05.574 23:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:15:05.574 00:15:05.574 real 0m15.553s 00:15:05.574 user 0m28.199s 00:15:05.574 sys 0m1.613s 00:15:05.574 23:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:05.574 23:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.574 23:30:28 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:15:05.574 23:30:28 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:15:05.574 23:30:28 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:05.574 23:30:28 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:05.574 23:30:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:05.574 ************************************ 00:15:05.574 START TEST raid_state_function_test 00:15:05.574 ************************************ 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 false 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:05.574 Process raid pid: 58905 00:15:05.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
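Most of the raid_superblock_test output above comes from the verify_raid_bdev_properties helper, which pulls the raid bdev's info, extracts the configured base bdev names, and then spot-checks a few fields on each passthru bdev. Reduced to its RPC essentials it is roughly the following (a sketch rather than the literal helper, reusing the socket path and jq filters seen in the trace above):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    raid_bdev_info=$($rpc bdev_get_bdevs -b raid_bdev1 | jq '.[]')
    base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_bdev_info")
    for name in $base_bdev_names; do
        base_bdev_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        # Each passthru base bdev is expected to expose 512-byte blocks and no metadata/DIF
        [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]
        [[ $(jq .md_size <<< "$base_bdev_info") == null ]]
        [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
        [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]
    done

The test's negative check earlier in the trace is the same idea in reverse: with raid_bdev1's superblock still present on malloc1-malloc3, the direct bdev_raid_create call (-z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1) is expected to fail with JSON-RPC error -17, "File exists", which is what the NOT wrapper from autotest_common.sh asserts by checking for a non-zero exit status.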
00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=58905 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 58905' 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 58905 /var/tmp/spdk-raid.sock 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 58905 ']' 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:05.574 23:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.574 [2024-05-14 23:30:28.824737] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
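Each state-function test case starts its own bdev_svc app and drives it over a private RPC socket: the raid pid echoed above (58905) is that app's process id, and waitforlisten blocks until the socket is accepting RPCs before the first bdev_raid_create is issued. A rough sketch of that startup, using the binary and flags shown in the trace (the polling loop below is a simplified stand-in for the waitforlisten helper in autotest_common.sh, which checks RPC responsiveness rather than just the socket file):

    svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Launch the bdev service app with bdev_raid debug logging enabled
    $svc -r $sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Wait for the RPC socket to appear before talking to the app
    until [ -S $sock ]; do sleep 0.1; done
    # First call of the test: create the concat volume before any base bdev exists,
    # which leaves Existed_Raid in the "configuring" state, as the trace below shows
    $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid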
00:15:05.574 [2024-05-14 23:30:28.824961] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.841 [2024-05-14 23:30:28.995789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.101 [2024-05-14 23:30:29.289488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.360 [2024-05-14 23:30:29.532213] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.619 23:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:06.619 23:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:15:06.619 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:06.878 [2024-05-14 23:30:29.980418] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:06.878 [2024-05-14 23:30:29.980533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:06.878 [2024-05-14 23:30:29.980558] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.878 [2024-05-14 23:30:29.980591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.878 [2024-05-14 23:30:29.980608] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:06.878 [2024-05-14 23:30:29.980679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.878 23:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.137 23:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:07.137 "name": "Existed_Raid", 00:15:07.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.137 "strip_size_kb": 64, 
00:15:07.137 "state": "configuring", 00:15:07.137 "raid_level": "concat", 00:15:07.137 "superblock": false, 00:15:07.137 "num_base_bdevs": 3, 00:15:07.137 "num_base_bdevs_discovered": 0, 00:15:07.137 "num_base_bdevs_operational": 3, 00:15:07.137 "base_bdevs_list": [ 00:15:07.137 { 00:15:07.137 "name": "BaseBdev1", 00:15:07.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.137 "is_configured": false, 00:15:07.137 "data_offset": 0, 00:15:07.137 "data_size": 0 00:15:07.137 }, 00:15:07.137 { 00:15:07.137 "name": "BaseBdev2", 00:15:07.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.137 "is_configured": false, 00:15:07.137 "data_offset": 0, 00:15:07.137 "data_size": 0 00:15:07.137 }, 00:15:07.137 { 00:15:07.137 "name": "BaseBdev3", 00:15:07.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.137 "is_configured": false, 00:15:07.137 "data_offset": 0, 00:15:07.137 "data_size": 0 00:15:07.137 } 00:15:07.137 ] 00:15:07.137 }' 00:15:07.137 23:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:07.137 23:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.705 23:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:07.964 [2024-05-14 23:30:31.024376] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.964 [2024-05-14 23:30:31.024427] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:15:07.964 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:07.964 [2024-05-14 23:30:31.212424] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:07.964 [2024-05-14 23:30:31.212505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:07.964 [2024-05-14 23:30:31.212521] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.964 [2024-05-14 23:30:31.212555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.964 [2024-05-14 23:30:31.212565] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:07.964 [2024-05-14 23:30:31.212591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:07.964 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:08.222 [2024-05-14 23:30:31.440754] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.222 BaseBdev1 00:15:08.222 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:15:08.222 23:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:08.222 23:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:08.222 23:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:08.222 23:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' 
]] 00:15:08.222 23:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:08.222 23:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:08.481 23:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:08.740 [ 00:15:08.740 { 00:15:08.740 "name": "BaseBdev1", 00:15:08.740 "aliases": [ 00:15:08.740 "bb7d3db4-ea6d-46de-9df0-0b9d1ca676e2" 00:15:08.740 ], 00:15:08.740 "product_name": "Malloc disk", 00:15:08.740 "block_size": 512, 00:15:08.740 "num_blocks": 65536, 00:15:08.740 "uuid": "bb7d3db4-ea6d-46de-9df0-0b9d1ca676e2", 00:15:08.740 "assigned_rate_limits": { 00:15:08.740 "rw_ios_per_sec": 0, 00:15:08.740 "rw_mbytes_per_sec": 0, 00:15:08.740 "r_mbytes_per_sec": 0, 00:15:08.740 "w_mbytes_per_sec": 0 00:15:08.740 }, 00:15:08.740 "claimed": true, 00:15:08.740 "claim_type": "exclusive_write", 00:15:08.740 "zoned": false, 00:15:08.740 "supported_io_types": { 00:15:08.740 "read": true, 00:15:08.740 "write": true, 00:15:08.740 "unmap": true, 00:15:08.740 "write_zeroes": true, 00:15:08.740 "flush": true, 00:15:08.740 "reset": true, 00:15:08.740 "compare": false, 00:15:08.740 "compare_and_write": false, 00:15:08.740 "abort": true, 00:15:08.740 "nvme_admin": false, 00:15:08.740 "nvme_io": false 00:15:08.740 }, 00:15:08.740 "memory_domains": [ 00:15:08.740 { 00:15:08.740 "dma_device_id": "system", 00:15:08.740 "dma_device_type": 1 00:15:08.740 }, 00:15:08.740 { 00:15:08.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.740 "dma_device_type": 2 00:15:08.740 } 00:15:08.740 ], 00:15:08.740 "driver_specific": {} 00:15:08.740 } 00:15:08.740 ] 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.740 23:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.999 23:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:15:08.999 "name": "Existed_Raid", 00:15:08.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.999 "strip_size_kb": 64, 00:15:08.999 "state": "configuring", 00:15:08.999 "raid_level": "concat", 00:15:08.999 "superblock": false, 00:15:08.999 "num_base_bdevs": 3, 00:15:08.999 "num_base_bdevs_discovered": 1, 00:15:08.999 "num_base_bdevs_operational": 3, 00:15:08.999 "base_bdevs_list": [ 00:15:08.999 { 00:15:08.999 "name": "BaseBdev1", 00:15:08.999 "uuid": "bb7d3db4-ea6d-46de-9df0-0b9d1ca676e2", 00:15:08.999 "is_configured": true, 00:15:08.999 "data_offset": 0, 00:15:08.999 "data_size": 65536 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "name": "BaseBdev2", 00:15:08.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.999 "is_configured": false, 00:15:08.999 "data_offset": 0, 00:15:08.999 "data_size": 0 00:15:08.999 }, 00:15:08.999 { 00:15:08.999 "name": "BaseBdev3", 00:15:08.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.999 "is_configured": false, 00:15:08.999 "data_offset": 0, 00:15:08.999 "data_size": 0 00:15:08.999 } 00:15:08.999 ] 00:15:08.999 }' 00:15:08.999 23:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.999 23:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.565 23:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:09.824 [2024-05-14 23:30:32.880999] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:09.824 [2024-05-14 23:30:32.881059] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:15:09.824 23:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:09.824 [2024-05-14 23:30:33.081062] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.824 [2024-05-14 23:30:33.082661] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:09.824 [2024-05-14 23:30:33.082732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.824 [2024-05-14 23:30:33.082746] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:09.824 [2024-05-14 23:30:33.082780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 
-- # local num_base_bdevs_operational=3 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.824 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.082 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:10.082 "name": "Existed_Raid", 00:15:10.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.082 "strip_size_kb": 64, 00:15:10.082 "state": "configuring", 00:15:10.082 "raid_level": "concat", 00:15:10.082 "superblock": false, 00:15:10.082 "num_base_bdevs": 3, 00:15:10.082 "num_base_bdevs_discovered": 1, 00:15:10.082 "num_base_bdevs_operational": 3, 00:15:10.082 "base_bdevs_list": [ 00:15:10.082 { 00:15:10.082 "name": "BaseBdev1", 00:15:10.082 "uuid": "bb7d3db4-ea6d-46de-9df0-0b9d1ca676e2", 00:15:10.082 "is_configured": true, 00:15:10.082 "data_offset": 0, 00:15:10.082 "data_size": 65536 00:15:10.082 }, 00:15:10.082 { 00:15:10.082 "name": "BaseBdev2", 00:15:10.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.082 "is_configured": false, 00:15:10.082 "data_offset": 0, 00:15:10.082 "data_size": 0 00:15:10.082 }, 00:15:10.082 { 00:15:10.082 "name": "BaseBdev3", 00:15:10.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.082 "is_configured": false, 00:15:10.082 "data_offset": 0, 00:15:10.082 "data_size": 0 00:15:10.082 } 00:15:10.082 ] 00:15:10.082 }' 00:15:10.082 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:10.082 23:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.027 23:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:11.027 BaseBdev2 00:15:11.027 [2024-05-14 23:30:34.182391] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.027 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:15:11.027 23:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:11.027 23:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:11.027 23:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:11.027 23:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:11.027 23:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:11.027 23:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.285 23:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:11.551 [ 00:15:11.551 { 00:15:11.551 "name": "BaseBdev2", 00:15:11.551 "aliases": [ 00:15:11.551 "0bca4d65-c50e-4162-ba61-af4bf2c5aeb3" 00:15:11.551 ], 00:15:11.551 "product_name": "Malloc disk", 00:15:11.551 "block_size": 512, 00:15:11.551 "num_blocks": 65536, 00:15:11.551 "uuid": "0bca4d65-c50e-4162-ba61-af4bf2c5aeb3", 00:15:11.551 "assigned_rate_limits": { 00:15:11.551 "rw_ios_per_sec": 0, 00:15:11.551 "rw_mbytes_per_sec": 0, 00:15:11.551 "r_mbytes_per_sec": 0, 00:15:11.551 "w_mbytes_per_sec": 0 00:15:11.551 }, 00:15:11.551 "claimed": true, 00:15:11.551 "claim_type": "exclusive_write", 00:15:11.551 "zoned": false, 00:15:11.552 "supported_io_types": { 00:15:11.552 "read": true, 00:15:11.552 "write": true, 00:15:11.552 "unmap": true, 00:15:11.552 "write_zeroes": true, 00:15:11.552 "flush": true, 00:15:11.552 "reset": true, 00:15:11.552 "compare": false, 00:15:11.552 "compare_and_write": false, 00:15:11.552 "abort": true, 00:15:11.552 "nvme_admin": false, 00:15:11.552 "nvme_io": false 00:15:11.552 }, 00:15:11.552 "memory_domains": [ 00:15:11.552 { 00:15:11.552 "dma_device_id": "system", 00:15:11.552 "dma_device_type": 1 00:15:11.552 }, 00:15:11.552 { 00:15:11.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.552 "dma_device_type": 2 00:15:11.552 } 00:15:11.552 ], 00:15:11.552 "driver_specific": {} 00:15:11.552 } 00:15:11.552 ] 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.552 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.811 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.811 "name": "Existed_Raid", 00:15:11.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.811 "strip_size_kb": 64, 00:15:11.811 "state": "configuring", 00:15:11.811 "raid_level": "concat", 00:15:11.811 "superblock": false, 
00:15:11.811 "num_base_bdevs": 3, 00:15:11.811 "num_base_bdevs_discovered": 2, 00:15:11.811 "num_base_bdevs_operational": 3, 00:15:11.811 "base_bdevs_list": [ 00:15:11.811 { 00:15:11.811 "name": "BaseBdev1", 00:15:11.811 "uuid": "bb7d3db4-ea6d-46de-9df0-0b9d1ca676e2", 00:15:11.811 "is_configured": true, 00:15:11.811 "data_offset": 0, 00:15:11.811 "data_size": 65536 00:15:11.811 }, 00:15:11.811 { 00:15:11.811 "name": "BaseBdev2", 00:15:11.811 "uuid": "0bca4d65-c50e-4162-ba61-af4bf2c5aeb3", 00:15:11.811 "is_configured": true, 00:15:11.811 "data_offset": 0, 00:15:11.811 "data_size": 65536 00:15:11.811 }, 00:15:11.811 { 00:15:11.811 "name": "BaseBdev3", 00:15:11.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.811 "is_configured": false, 00:15:11.811 "data_offset": 0, 00:15:11.811 "data_size": 0 00:15:11.811 } 00:15:11.811 ] 00:15:11.811 }' 00:15:11.811 23:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.811 23:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.377 23:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:12.636 [2024-05-14 23:30:35.859632] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.636 [2024-05-14 23:30:35.859680] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:15:12.636 [2024-05-14 23:30:35.859690] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:12.636 [2024-05-14 23:30:35.859795] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:12.636 [2024-05-14 23:30:35.860060] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:15:12.636 [2024-05-14 23:30:35.860076] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:15:12.636 BaseBdev3 00:15:12.636 [2024-05-14 23:30:35.860535] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.636 23:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:15:12.636 23:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:12.636 23:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:12.636 23:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:12.636 23:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:12.636 23:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:12.636 23:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.894 23:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:13.152 [ 00:15:13.152 { 00:15:13.152 "name": "BaseBdev3", 00:15:13.152 "aliases": [ 00:15:13.152 "3ecbbe40-a1b3-4a81-97ee-834e406a3b36" 00:15:13.152 ], 00:15:13.152 "product_name": "Malloc disk", 00:15:13.152 "block_size": 512, 00:15:13.152 "num_blocks": 65536, 00:15:13.152 "uuid": 
"3ecbbe40-a1b3-4a81-97ee-834e406a3b36", 00:15:13.152 "assigned_rate_limits": { 00:15:13.152 "rw_ios_per_sec": 0, 00:15:13.152 "rw_mbytes_per_sec": 0, 00:15:13.152 "r_mbytes_per_sec": 0, 00:15:13.152 "w_mbytes_per_sec": 0 00:15:13.152 }, 00:15:13.152 "claimed": true, 00:15:13.152 "claim_type": "exclusive_write", 00:15:13.152 "zoned": false, 00:15:13.152 "supported_io_types": { 00:15:13.152 "read": true, 00:15:13.152 "write": true, 00:15:13.152 "unmap": true, 00:15:13.152 "write_zeroes": true, 00:15:13.152 "flush": true, 00:15:13.152 "reset": true, 00:15:13.152 "compare": false, 00:15:13.152 "compare_and_write": false, 00:15:13.152 "abort": true, 00:15:13.152 "nvme_admin": false, 00:15:13.152 "nvme_io": false 00:15:13.152 }, 00:15:13.152 "memory_domains": [ 00:15:13.152 { 00:15:13.152 "dma_device_id": "system", 00:15:13.152 "dma_device_type": 1 00:15:13.152 }, 00:15:13.152 { 00:15:13.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.152 "dma_device_type": 2 00:15:13.152 } 00:15:13.152 ], 00:15:13.152 "driver_specific": {} 00:15:13.152 } 00:15:13.152 ] 00:15:13.152 23:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:13.152 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:13.152 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:13.152 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:13.152 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.153 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:13.153 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:13.153 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:13.153 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:13.153 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.153 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.153 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.153 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.153 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.153 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.411 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.411 "name": "Existed_Raid", 00:15:13.411 "uuid": "dc8fddd8-8f33-44ca-81f0-2f7695a54c66", 00:15:13.411 "strip_size_kb": 64, 00:15:13.411 "state": "online", 00:15:13.411 "raid_level": "concat", 00:15:13.411 "superblock": false, 00:15:13.411 "num_base_bdevs": 3, 00:15:13.411 "num_base_bdevs_discovered": 3, 00:15:13.412 "num_base_bdevs_operational": 3, 00:15:13.412 "base_bdevs_list": [ 00:15:13.412 { 00:15:13.412 "name": "BaseBdev1", 00:15:13.412 "uuid": "bb7d3db4-ea6d-46de-9df0-0b9d1ca676e2", 00:15:13.412 "is_configured": true, 00:15:13.412 "data_offset": 0, 00:15:13.412 "data_size": 
65536 00:15:13.412 }, 00:15:13.412 { 00:15:13.412 "name": "BaseBdev2", 00:15:13.412 "uuid": "0bca4d65-c50e-4162-ba61-af4bf2c5aeb3", 00:15:13.412 "is_configured": true, 00:15:13.412 "data_offset": 0, 00:15:13.412 "data_size": 65536 00:15:13.412 }, 00:15:13.412 { 00:15:13.412 "name": "BaseBdev3", 00:15:13.412 "uuid": "3ecbbe40-a1b3-4a81-97ee-834e406a3b36", 00:15:13.412 "is_configured": true, 00:15:13.412 "data_offset": 0, 00:15:13.412 "data_size": 65536 00:15:13.412 } 00:15:13.412 ] 00:15:13.412 }' 00:15:13.412 23:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.412 23:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.978 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:15:13.978 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:13.978 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:13.978 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:13.978 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:13.978 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:13.978 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:13.978 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:14.236 [2024-05-14 23:30:37.424068] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.236 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:14.236 "name": "Existed_Raid", 00:15:14.236 "aliases": [ 00:15:14.236 "dc8fddd8-8f33-44ca-81f0-2f7695a54c66" 00:15:14.236 ], 00:15:14.236 "product_name": "Raid Volume", 00:15:14.236 "block_size": 512, 00:15:14.236 "num_blocks": 196608, 00:15:14.236 "uuid": "dc8fddd8-8f33-44ca-81f0-2f7695a54c66", 00:15:14.236 "assigned_rate_limits": { 00:15:14.236 "rw_ios_per_sec": 0, 00:15:14.236 "rw_mbytes_per_sec": 0, 00:15:14.236 "r_mbytes_per_sec": 0, 00:15:14.236 "w_mbytes_per_sec": 0 00:15:14.236 }, 00:15:14.236 "claimed": false, 00:15:14.236 "zoned": false, 00:15:14.236 "supported_io_types": { 00:15:14.236 "read": true, 00:15:14.236 "write": true, 00:15:14.236 "unmap": true, 00:15:14.236 "write_zeroes": true, 00:15:14.236 "flush": true, 00:15:14.236 "reset": true, 00:15:14.236 "compare": false, 00:15:14.236 "compare_and_write": false, 00:15:14.236 "abort": false, 00:15:14.236 "nvme_admin": false, 00:15:14.236 "nvme_io": false 00:15:14.236 }, 00:15:14.236 "memory_domains": [ 00:15:14.236 { 00:15:14.236 "dma_device_id": "system", 00:15:14.236 "dma_device_type": 1 00:15:14.236 }, 00:15:14.236 { 00:15:14.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.236 "dma_device_type": 2 00:15:14.236 }, 00:15:14.236 { 00:15:14.236 "dma_device_id": "system", 00:15:14.236 "dma_device_type": 1 00:15:14.236 }, 00:15:14.236 { 00:15:14.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.236 "dma_device_type": 2 00:15:14.236 }, 00:15:14.236 { 00:15:14.236 "dma_device_id": "system", 00:15:14.236 "dma_device_type": 1 00:15:14.236 }, 00:15:14.236 { 00:15:14.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.236 "dma_device_type": 2 00:15:14.236 } 
00:15:14.236 ], 00:15:14.236 "driver_specific": { 00:15:14.236 "raid": { 00:15:14.236 "uuid": "dc8fddd8-8f33-44ca-81f0-2f7695a54c66", 00:15:14.236 "strip_size_kb": 64, 00:15:14.236 "state": "online", 00:15:14.236 "raid_level": "concat", 00:15:14.236 "superblock": false, 00:15:14.236 "num_base_bdevs": 3, 00:15:14.236 "num_base_bdevs_discovered": 3, 00:15:14.236 "num_base_bdevs_operational": 3, 00:15:14.236 "base_bdevs_list": [ 00:15:14.236 { 00:15:14.236 "name": "BaseBdev1", 00:15:14.236 "uuid": "bb7d3db4-ea6d-46de-9df0-0b9d1ca676e2", 00:15:14.236 "is_configured": true, 00:15:14.236 "data_offset": 0, 00:15:14.236 "data_size": 65536 00:15:14.236 }, 00:15:14.236 { 00:15:14.236 "name": "BaseBdev2", 00:15:14.236 "uuid": "0bca4d65-c50e-4162-ba61-af4bf2c5aeb3", 00:15:14.236 "is_configured": true, 00:15:14.236 "data_offset": 0, 00:15:14.236 "data_size": 65536 00:15:14.236 }, 00:15:14.236 { 00:15:14.236 "name": "BaseBdev3", 00:15:14.236 "uuid": "3ecbbe40-a1b3-4a81-97ee-834e406a3b36", 00:15:14.236 "is_configured": true, 00:15:14.236 "data_offset": 0, 00:15:14.236 "data_size": 65536 00:15:14.236 } 00:15:14.236 ] 00:15:14.236 } 00:15:14.236 } 00:15:14.236 }' 00:15:14.236 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:14.236 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:15:14.236 BaseBdev2 00:15:14.236 BaseBdev3' 00:15:14.236 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:14.236 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:14.237 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:14.495 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:14.495 "name": "BaseBdev1", 00:15:14.495 "aliases": [ 00:15:14.495 "bb7d3db4-ea6d-46de-9df0-0b9d1ca676e2" 00:15:14.495 ], 00:15:14.495 "product_name": "Malloc disk", 00:15:14.495 "block_size": 512, 00:15:14.495 "num_blocks": 65536, 00:15:14.495 "uuid": "bb7d3db4-ea6d-46de-9df0-0b9d1ca676e2", 00:15:14.495 "assigned_rate_limits": { 00:15:14.495 "rw_ios_per_sec": 0, 00:15:14.495 "rw_mbytes_per_sec": 0, 00:15:14.495 "r_mbytes_per_sec": 0, 00:15:14.495 "w_mbytes_per_sec": 0 00:15:14.495 }, 00:15:14.495 "claimed": true, 00:15:14.495 "claim_type": "exclusive_write", 00:15:14.495 "zoned": false, 00:15:14.495 "supported_io_types": { 00:15:14.495 "read": true, 00:15:14.495 "write": true, 00:15:14.495 "unmap": true, 00:15:14.495 "write_zeroes": true, 00:15:14.495 "flush": true, 00:15:14.495 "reset": true, 00:15:14.495 "compare": false, 00:15:14.495 "compare_and_write": false, 00:15:14.495 "abort": true, 00:15:14.495 "nvme_admin": false, 00:15:14.495 "nvme_io": false 00:15:14.495 }, 00:15:14.495 "memory_domains": [ 00:15:14.495 { 00:15:14.495 "dma_device_id": "system", 00:15:14.495 "dma_device_type": 1 00:15:14.495 }, 00:15:14.495 { 00:15:14.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.495 "dma_device_type": 2 00:15:14.495 } 00:15:14.495 ], 00:15:14.495 "driver_specific": {} 00:15:14.495 }' 00:15:14.495 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:14.495 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:14.753 23:30:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:14.754 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:14.754 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:14.754 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:14.754 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:14.754 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:14.754 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:14.754 23:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:14.754 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:15.012 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:15.012 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:15.012 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:15.012 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:15.271 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:15.271 "name": "BaseBdev2", 00:15:15.271 "aliases": [ 00:15:15.271 "0bca4d65-c50e-4162-ba61-af4bf2c5aeb3" 00:15:15.271 ], 00:15:15.271 "product_name": "Malloc disk", 00:15:15.271 "block_size": 512, 00:15:15.271 "num_blocks": 65536, 00:15:15.271 "uuid": "0bca4d65-c50e-4162-ba61-af4bf2c5aeb3", 00:15:15.271 "assigned_rate_limits": { 00:15:15.271 "rw_ios_per_sec": 0, 00:15:15.271 "rw_mbytes_per_sec": 0, 00:15:15.271 "r_mbytes_per_sec": 0, 00:15:15.271 "w_mbytes_per_sec": 0 00:15:15.271 }, 00:15:15.271 "claimed": true, 00:15:15.271 "claim_type": "exclusive_write", 00:15:15.271 "zoned": false, 00:15:15.271 "supported_io_types": { 00:15:15.271 "read": true, 00:15:15.271 "write": true, 00:15:15.271 "unmap": true, 00:15:15.271 "write_zeroes": true, 00:15:15.271 "flush": true, 00:15:15.271 "reset": true, 00:15:15.271 "compare": false, 00:15:15.271 "compare_and_write": false, 00:15:15.271 "abort": true, 00:15:15.271 "nvme_admin": false, 00:15:15.271 "nvme_io": false 00:15:15.271 }, 00:15:15.271 "memory_domains": [ 00:15:15.271 { 00:15:15.271 "dma_device_id": "system", 00:15:15.271 "dma_device_type": 1 00:15:15.271 }, 00:15:15.271 { 00:15:15.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.271 "dma_device_type": 2 00:15:15.271 } 00:15:15.271 ], 00:15:15.271 "driver_specific": {} 00:15:15.271 }' 00:15:15.271 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:15.271 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:15.271 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:15.271 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:15.271 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:15.271 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:15.271 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 
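The block of paired jq calls running through this part of the trace is verify_raid_bdev_properties: for the raid volume and then for each base bdev it pulls the bdev descriptor once and checks that block_size, md_size, md_interleave and dif_type line up. Reduced to its essentials (same socket and names as above; 512/null are what these Malloc disks report in this run, and the shell variable is just a device for the sketch, not the script's exact code):

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
# Pull one base bdev's descriptor ...
info=$($rpc bdev_get_bdevs -b BaseBdev2 | jq '.[]')
# ... and inspect the layout fields the test compares against the raid volume's
jq .block_size    <<< "$info"   # 512 here
jq .md_size       <<< "$info"   # null (no separate metadata)
jq .md_interleave <<< "$info"   # null
jq .dif_type      <<< "$info"   # null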
00:15:15.530 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:15.530 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:15.530 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:15.530 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:15.530 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:15.530 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:15.530 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:15.530 23:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:15.789 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:15.789 "name": "BaseBdev3", 00:15:15.789 "aliases": [ 00:15:15.789 "3ecbbe40-a1b3-4a81-97ee-834e406a3b36" 00:15:15.789 ], 00:15:15.789 "product_name": "Malloc disk", 00:15:15.789 "block_size": 512, 00:15:15.789 "num_blocks": 65536, 00:15:15.789 "uuid": "3ecbbe40-a1b3-4a81-97ee-834e406a3b36", 00:15:15.789 "assigned_rate_limits": { 00:15:15.789 "rw_ios_per_sec": 0, 00:15:15.789 "rw_mbytes_per_sec": 0, 00:15:15.789 "r_mbytes_per_sec": 0, 00:15:15.789 "w_mbytes_per_sec": 0 00:15:15.789 }, 00:15:15.789 "claimed": true, 00:15:15.789 "claim_type": "exclusive_write", 00:15:15.789 "zoned": false, 00:15:15.789 "supported_io_types": { 00:15:15.789 "read": true, 00:15:15.789 "write": true, 00:15:15.789 "unmap": true, 00:15:15.789 "write_zeroes": true, 00:15:15.789 "flush": true, 00:15:15.789 "reset": true, 00:15:15.789 "compare": false, 00:15:15.789 "compare_and_write": false, 00:15:15.789 "abort": true, 00:15:15.789 "nvme_admin": false, 00:15:15.789 "nvme_io": false 00:15:15.789 }, 00:15:15.789 "memory_domains": [ 00:15:15.789 { 00:15:15.789 "dma_device_id": "system", 00:15:15.789 "dma_device_type": 1 00:15:15.789 }, 00:15:15.789 { 00:15:15.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.789 "dma_device_type": 2 00:15:15.789 } 00:15:15.789 ], 00:15:15.789 "driver_specific": {} 00:15:15.789 }' 00:15:15.789 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:16.048 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:16.048 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:16.048 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:16.048 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:16.048 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:16.048 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:16.048 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:16.307 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:16.307 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:16.307 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:16.307 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ 
null == null ]] 00:15:16.307 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:16.565 [2024-05-14 23:30:39.704382] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:16.565 [2024-05-14 23:30:39.704418] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.565 [2024-05-14 23:30:39.704464] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.565 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:15:16.565 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:15:16.565 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.566 23:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.824 23:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.824 "name": "Existed_Raid", 00:15:16.824 "uuid": "dc8fddd8-8f33-44ca-81f0-2f7695a54c66", 00:15:16.824 "strip_size_kb": 64, 00:15:16.824 "state": "offline", 00:15:16.824 "raid_level": "concat", 00:15:16.824 "superblock": false, 00:15:16.824 "num_base_bdevs": 3, 00:15:16.824 "num_base_bdevs_discovered": 2, 00:15:16.824 "num_base_bdevs_operational": 2, 00:15:16.824 "base_bdevs_list": [ 00:15:16.824 { 00:15:16.824 "name": null, 00:15:16.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.824 "is_configured": false, 00:15:16.824 "data_offset": 0, 00:15:16.824 "data_size": 65536 00:15:16.824 }, 00:15:16.824 { 00:15:16.824 "name": "BaseBdev2", 00:15:16.824 "uuid": "0bca4d65-c50e-4162-ba61-af4bf2c5aeb3", 00:15:16.824 "is_configured": true, 00:15:16.824 "data_offset": 0, 00:15:16.824 "data_size": 65536 00:15:16.824 }, 00:15:16.824 { 00:15:16.824 "name": "BaseBdev3", 00:15:16.824 
"uuid": "3ecbbe40-a1b3-4a81-97ee-834e406a3b36", 00:15:16.824 "is_configured": true, 00:15:16.824 "data_offset": 0, 00:15:16.824 "data_size": 65536 00:15:16.824 } 00:15:16.824 ] 00:15:16.824 }' 00:15:16.824 23:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.824 23:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.392 23:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:17.392 23:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:17.392 23:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.392 23:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:17.706 23:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:17.706 23:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:17.706 23:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:17.965 [2024-05-14 23:30:41.078591] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:17.965 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:17.965 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:17.965 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:17.965 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.224 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:18.224 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.224 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:18.482 [2024-05-14 23:30:41.587590] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:18.482 [2024-05-14 23:30:41.587651] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:15:18.482 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:18.482 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.482 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.482 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:15:18.752 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:15:18.752 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:15:18.752 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:15:18.752 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:15:18.752 23:30:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:18.752 23:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:19.011 BaseBdev2 00:15:19.011 23:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:15:19.011 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:19.011 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:19.011 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:19.011 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:19.011 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:19.011 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.270 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:19.529 [ 00:15:19.529 { 00:15:19.529 "name": "BaseBdev2", 00:15:19.529 "aliases": [ 00:15:19.529 "2283f808-7345-4367-be16-1e895f11fded" 00:15:19.529 ], 00:15:19.529 "product_name": "Malloc disk", 00:15:19.529 "block_size": 512, 00:15:19.529 "num_blocks": 65536, 00:15:19.529 "uuid": "2283f808-7345-4367-be16-1e895f11fded", 00:15:19.529 "assigned_rate_limits": { 00:15:19.529 "rw_ios_per_sec": 0, 00:15:19.529 "rw_mbytes_per_sec": 0, 00:15:19.529 "r_mbytes_per_sec": 0, 00:15:19.529 "w_mbytes_per_sec": 0 00:15:19.529 }, 00:15:19.529 "claimed": false, 00:15:19.529 "zoned": false, 00:15:19.529 "supported_io_types": { 00:15:19.529 "read": true, 00:15:19.529 "write": true, 00:15:19.529 "unmap": true, 00:15:19.529 "write_zeroes": true, 00:15:19.529 "flush": true, 00:15:19.529 "reset": true, 00:15:19.529 "compare": false, 00:15:19.529 "compare_and_write": false, 00:15:19.529 "abort": true, 00:15:19.529 "nvme_admin": false, 00:15:19.529 "nvme_io": false 00:15:19.529 }, 00:15:19.529 "memory_domains": [ 00:15:19.529 { 00:15:19.529 "dma_device_id": "system", 00:15:19.529 "dma_device_type": 1 00:15:19.529 }, 00:15:19.529 { 00:15:19.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.529 "dma_device_type": 2 00:15:19.529 } 00:15:19.529 ], 00:15:19.529 "driver_specific": {} 00:15:19.529 } 00:15:19.529 ] 00:15:19.529 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:19.529 23:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:19.529 23:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:19.529 23:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:19.787 BaseBdev3 00:15:19.787 23:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:15:19.787 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:19.787 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local 
bdev_timeout= 00:15:19.788 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:19.788 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:19.788 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:19.788 23:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.788 23:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:20.046 [ 00:15:20.046 { 00:15:20.046 "name": "BaseBdev3", 00:15:20.046 "aliases": [ 00:15:20.046 "ef83b458-240f-4885-8ea6-f2142431c238" 00:15:20.046 ], 00:15:20.046 "product_name": "Malloc disk", 00:15:20.046 "block_size": 512, 00:15:20.046 "num_blocks": 65536, 00:15:20.046 "uuid": "ef83b458-240f-4885-8ea6-f2142431c238", 00:15:20.046 "assigned_rate_limits": { 00:15:20.046 "rw_ios_per_sec": 0, 00:15:20.046 "rw_mbytes_per_sec": 0, 00:15:20.046 "r_mbytes_per_sec": 0, 00:15:20.046 "w_mbytes_per_sec": 0 00:15:20.046 }, 00:15:20.046 "claimed": false, 00:15:20.046 "zoned": false, 00:15:20.046 "supported_io_types": { 00:15:20.046 "read": true, 00:15:20.046 "write": true, 00:15:20.046 "unmap": true, 00:15:20.046 "write_zeroes": true, 00:15:20.046 "flush": true, 00:15:20.046 "reset": true, 00:15:20.046 "compare": false, 00:15:20.046 "compare_and_write": false, 00:15:20.046 "abort": true, 00:15:20.046 "nvme_admin": false, 00:15:20.046 "nvme_io": false 00:15:20.046 }, 00:15:20.046 "memory_domains": [ 00:15:20.046 { 00:15:20.046 "dma_device_id": "system", 00:15:20.046 "dma_device_type": 1 00:15:20.046 }, 00:15:20.046 { 00:15:20.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.046 "dma_device_type": 2 00:15:20.046 } 00:15:20.046 ], 00:15:20.046 "driver_specific": {} 00:15:20.046 } 00:15:20.046 ] 00:15:20.046 23:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:20.046 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:20.046 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:20.046 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:20.315 [2024-05-14 23:30:43.439439] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.315 [2024-05-14 23:30:43.439530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.315 [2024-05-14 23:30:43.439555] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.315 [2024-05-14 23:30:43.441052] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.315 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:20.315 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.315 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:20.315 23:30:43 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:20.315 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:20.315 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:20.315 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.315 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.315 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.316 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.316 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.316 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.580 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.580 "name": "Existed_Raid", 00:15:20.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.580 "strip_size_kb": 64, 00:15:20.580 "state": "configuring", 00:15:20.580 "raid_level": "concat", 00:15:20.580 "superblock": false, 00:15:20.580 "num_base_bdevs": 3, 00:15:20.580 "num_base_bdevs_discovered": 2, 00:15:20.580 "num_base_bdevs_operational": 3, 00:15:20.580 "base_bdevs_list": [ 00:15:20.580 { 00:15:20.580 "name": "BaseBdev1", 00:15:20.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.580 "is_configured": false, 00:15:20.580 "data_offset": 0, 00:15:20.580 "data_size": 0 00:15:20.580 }, 00:15:20.580 { 00:15:20.580 "name": "BaseBdev2", 00:15:20.580 "uuid": "2283f808-7345-4367-be16-1e895f11fded", 00:15:20.580 "is_configured": true, 00:15:20.580 "data_offset": 0, 00:15:20.580 "data_size": 65536 00:15:20.580 }, 00:15:20.580 { 00:15:20.580 "name": "BaseBdev3", 00:15:20.580 "uuid": "ef83b458-240f-4885-8ea6-f2142431c238", 00:15:20.580 "is_configured": true, 00:15:20.580 "data_offset": 0, 00:15:20.580 "data_size": 65536 00:15:20.580 } 00:15:20.580 ] 00:15:20.580 }' 00:15:20.580 23:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.580 23:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.204 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:21.462 [2024-05-14 23:30:44.583661] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- 
# local raid_bdev_info 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.462 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.720 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.720 "name": "Existed_Raid", 00:15:21.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.720 "strip_size_kb": 64, 00:15:21.721 "state": "configuring", 00:15:21.721 "raid_level": "concat", 00:15:21.721 "superblock": false, 00:15:21.721 "num_base_bdevs": 3, 00:15:21.721 "num_base_bdevs_discovered": 1, 00:15:21.721 "num_base_bdevs_operational": 3, 00:15:21.721 "base_bdevs_list": [ 00:15:21.721 { 00:15:21.721 "name": "BaseBdev1", 00:15:21.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.721 "is_configured": false, 00:15:21.721 "data_offset": 0, 00:15:21.721 "data_size": 0 00:15:21.721 }, 00:15:21.721 { 00:15:21.721 "name": null, 00:15:21.721 "uuid": "2283f808-7345-4367-be16-1e895f11fded", 00:15:21.721 "is_configured": false, 00:15:21.721 "data_offset": 0, 00:15:21.721 "data_size": 65536 00:15:21.721 }, 00:15:21.721 { 00:15:21.721 "name": "BaseBdev3", 00:15:21.721 "uuid": "ef83b458-240f-4885-8ea6-f2142431c238", 00:15:21.721 "is_configured": true, 00:15:21.721 "data_offset": 0, 00:15:21.721 "data_size": 65536 00:15:21.721 } 00:15:21.721 ] 00:15:21.721 }' 00:15:21.721 23:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.721 23:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.289 23:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.289 23:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:22.547 23:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:15:22.547 23:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:22.806 [2024-05-14 23:30:45.919126] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.806 BaseBdev1 00:15:22.806 23:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:15:22.806 23:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:22.806 23:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:22.806 23:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:22.806 23:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:22.806 23:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:22.806 23:30:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.064 23:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.064 [ 00:15:23.064 { 00:15:23.064 "name": "BaseBdev1", 00:15:23.064 "aliases": [ 00:15:23.064 "de254533-adb2-4139-ad56-17124cccce60" 00:15:23.064 ], 00:15:23.064 "product_name": "Malloc disk", 00:15:23.064 "block_size": 512, 00:15:23.064 "num_blocks": 65536, 00:15:23.064 "uuid": "de254533-adb2-4139-ad56-17124cccce60", 00:15:23.064 "assigned_rate_limits": { 00:15:23.064 "rw_ios_per_sec": 0, 00:15:23.064 "rw_mbytes_per_sec": 0, 00:15:23.064 "r_mbytes_per_sec": 0, 00:15:23.064 "w_mbytes_per_sec": 0 00:15:23.064 }, 00:15:23.064 "claimed": true, 00:15:23.064 "claim_type": "exclusive_write", 00:15:23.064 "zoned": false, 00:15:23.064 "supported_io_types": { 00:15:23.064 "read": true, 00:15:23.064 "write": true, 00:15:23.064 "unmap": true, 00:15:23.064 "write_zeroes": true, 00:15:23.064 "flush": true, 00:15:23.064 "reset": true, 00:15:23.064 "compare": false, 00:15:23.064 "compare_and_write": false, 00:15:23.064 "abort": true, 00:15:23.064 "nvme_admin": false, 00:15:23.064 "nvme_io": false 00:15:23.064 }, 00:15:23.064 "memory_domains": [ 00:15:23.064 { 00:15:23.064 "dma_device_id": "system", 00:15:23.064 "dma_device_type": 1 00:15:23.064 }, 00:15:23.064 { 00:15:23.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.064 "dma_device_type": 2 00:15:23.064 } 00:15:23.064 ], 00:15:23.064 "driver_specific": {} 00:15:23.064 } 00:15:23.064 ] 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.065 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.323 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.323 "name": "Existed_Raid", 00:15:23.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.323 "strip_size_kb": 64, 
00:15:23.323 "state": "configuring", 00:15:23.323 "raid_level": "concat", 00:15:23.323 "superblock": false, 00:15:23.323 "num_base_bdevs": 3, 00:15:23.323 "num_base_bdevs_discovered": 2, 00:15:23.323 "num_base_bdevs_operational": 3, 00:15:23.323 "base_bdevs_list": [ 00:15:23.323 { 00:15:23.323 "name": "BaseBdev1", 00:15:23.323 "uuid": "de254533-adb2-4139-ad56-17124cccce60", 00:15:23.323 "is_configured": true, 00:15:23.323 "data_offset": 0, 00:15:23.323 "data_size": 65536 00:15:23.323 }, 00:15:23.323 { 00:15:23.323 "name": null, 00:15:23.323 "uuid": "2283f808-7345-4367-be16-1e895f11fded", 00:15:23.323 "is_configured": false, 00:15:23.323 "data_offset": 0, 00:15:23.323 "data_size": 65536 00:15:23.323 }, 00:15:23.323 { 00:15:23.323 "name": "BaseBdev3", 00:15:23.323 "uuid": "ef83b458-240f-4885-8ea6-f2142431c238", 00:15:23.323 "is_configured": true, 00:15:23.323 "data_offset": 0, 00:15:23.323 "data_size": 65536 00:15:23.323 } 00:15:23.323 ] 00:15:23.323 }' 00:15:23.323 23:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.323 23:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.260 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.260 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:24.260 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:24.260 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:24.519 [2024-05-14 23:30:47.619454] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:24.519 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:24.519 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.519 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:24.519 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:24.519 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.519 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:24.519 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.519 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.519 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.519 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.520 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.520 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.780 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.780 "name": "Existed_Raid", 00:15:24.780 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:24.780 "strip_size_kb": 64, 00:15:24.780 "state": "configuring", 00:15:24.780 "raid_level": "concat", 00:15:24.780 "superblock": false, 00:15:24.780 "num_base_bdevs": 3, 00:15:24.780 "num_base_bdevs_discovered": 1, 00:15:24.780 "num_base_bdevs_operational": 3, 00:15:24.780 "base_bdevs_list": [ 00:15:24.780 { 00:15:24.780 "name": "BaseBdev1", 00:15:24.780 "uuid": "de254533-adb2-4139-ad56-17124cccce60", 00:15:24.780 "is_configured": true, 00:15:24.780 "data_offset": 0, 00:15:24.780 "data_size": 65536 00:15:24.780 }, 00:15:24.780 { 00:15:24.780 "name": null, 00:15:24.780 "uuid": "2283f808-7345-4367-be16-1e895f11fded", 00:15:24.780 "is_configured": false, 00:15:24.780 "data_offset": 0, 00:15:24.780 "data_size": 65536 00:15:24.780 }, 00:15:24.780 { 00:15:24.780 "name": null, 00:15:24.780 "uuid": "ef83b458-240f-4885-8ea6-f2142431c238", 00:15:24.780 "is_configured": false, 00:15:24.780 "data_offset": 0, 00:15:24.780 "data_size": 65536 00:15:24.780 } 00:15:24.780 ] 00:15:24.780 }' 00:15:24.780 23:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.780 23:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.348 23:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.348 23:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:25.607 23:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:15:25.607 23:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:25.865 [2024-05-14 23:30:49.047696] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.865 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.123 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:15:26.123 "name": "Existed_Raid", 00:15:26.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.123 "strip_size_kb": 64, 00:15:26.123 "state": "configuring", 00:15:26.123 "raid_level": "concat", 00:15:26.123 "superblock": false, 00:15:26.123 "num_base_bdevs": 3, 00:15:26.123 "num_base_bdevs_discovered": 2, 00:15:26.123 "num_base_bdevs_operational": 3, 00:15:26.123 "base_bdevs_list": [ 00:15:26.123 { 00:15:26.123 "name": "BaseBdev1", 00:15:26.123 "uuid": "de254533-adb2-4139-ad56-17124cccce60", 00:15:26.123 "is_configured": true, 00:15:26.123 "data_offset": 0, 00:15:26.123 "data_size": 65536 00:15:26.123 }, 00:15:26.123 { 00:15:26.123 "name": null, 00:15:26.123 "uuid": "2283f808-7345-4367-be16-1e895f11fded", 00:15:26.123 "is_configured": false, 00:15:26.123 "data_offset": 0, 00:15:26.123 "data_size": 65536 00:15:26.123 }, 00:15:26.123 { 00:15:26.123 "name": "BaseBdev3", 00:15:26.123 "uuid": "ef83b458-240f-4885-8ea6-f2142431c238", 00:15:26.123 "is_configured": true, 00:15:26.123 "data_offset": 0, 00:15:26.123 "data_size": 65536 00:15:26.123 } 00:15:26.123 ] 00:15:26.123 }' 00:15:26.123 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.123 23:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.060 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.060 23:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:27.060 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:15:27.060 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:27.319 [2024-05-14 23:30:50.512022] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.578 23:30:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:27.578 "name": "Existed_Raid", 00:15:27.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.578 "strip_size_kb": 64, 00:15:27.578 "state": "configuring", 00:15:27.578 "raid_level": "concat", 00:15:27.578 "superblock": false, 00:15:27.578 "num_base_bdevs": 3, 00:15:27.578 "num_base_bdevs_discovered": 1, 00:15:27.578 "num_base_bdevs_operational": 3, 00:15:27.578 "base_bdevs_list": [ 00:15:27.578 { 00:15:27.578 "name": null, 00:15:27.578 "uuid": "de254533-adb2-4139-ad56-17124cccce60", 00:15:27.578 "is_configured": false, 00:15:27.578 "data_offset": 0, 00:15:27.578 "data_size": 65536 00:15:27.578 }, 00:15:27.578 { 00:15:27.578 "name": null, 00:15:27.578 "uuid": "2283f808-7345-4367-be16-1e895f11fded", 00:15:27.578 "is_configured": false, 00:15:27.578 "data_offset": 0, 00:15:27.578 "data_size": 65536 00:15:27.578 }, 00:15:27.578 { 00:15:27.578 "name": "BaseBdev3", 00:15:27.578 "uuid": "ef83b458-240f-4885-8ea6-f2142431c238", 00:15:27.578 "is_configured": true, 00:15:27.578 "data_offset": 0, 00:15:27.578 "data_size": 65536 00:15:27.578 } 00:15:27.578 ] 00:15:27.578 }' 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:27.578 23:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.516 23:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.516 23:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:28.786 23:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:15:28.786 23:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:29.055 [2024-05-14 23:30:52.072014] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.055 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.055 "name": "Existed_Raid", 00:15:29.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.055 "strip_size_kb": 64, 00:15:29.055 "state": "configuring", 00:15:29.055 "raid_level": "concat", 00:15:29.055 "superblock": false, 00:15:29.055 "num_base_bdevs": 3, 00:15:29.055 "num_base_bdevs_discovered": 2, 00:15:29.055 "num_base_bdevs_operational": 3, 00:15:29.055 "base_bdevs_list": [ 00:15:29.056 { 00:15:29.056 "name": null, 00:15:29.056 "uuid": "de254533-adb2-4139-ad56-17124cccce60", 00:15:29.056 "is_configured": false, 00:15:29.056 "data_offset": 0, 00:15:29.056 "data_size": 65536 00:15:29.056 }, 00:15:29.056 { 00:15:29.056 "name": "BaseBdev2", 00:15:29.056 "uuid": "2283f808-7345-4367-be16-1e895f11fded", 00:15:29.056 "is_configured": true, 00:15:29.056 "data_offset": 0, 00:15:29.056 "data_size": 65536 00:15:29.056 }, 00:15:29.056 { 00:15:29.056 "name": "BaseBdev3", 00:15:29.056 "uuid": "ef83b458-240f-4885-8ea6-f2142431c238", 00:15:29.056 "is_configured": true, 00:15:29.056 "data_offset": 0, 00:15:29.056 "data_size": 65536 00:15:29.056 } 00:15:29.056 ] 00:15:29.056 }' 00:15:29.056 23:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.056 23:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.994 23:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.994 23:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:29.994 23:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:15:29.994 23:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.994 23:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:30.253 23:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u de254533-adb2-4139-ad56-17124cccce60 00:15:30.511 [2024-05-14 23:30:53.646214] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:30.511 [2024-05-14 23:30:53.646260] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:15:30.511 [2024-05-14 23:30:53.646272] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:30.511 [2024-05-14 23:30:53.646393] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:30.511 [2024-05-14 23:30:53.646669] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:15:30.511 [2024-05-14 23:30:53.646713] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:15:30.511 [2024-05-14 23:30:53.646933] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.511 NewBaseBdev 00:15:30.511 23:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:15:30.511 23:30:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:15:30.511 23:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:30.511 23:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:30.511 23:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:30.511 23:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:30.511 23:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.769 23:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:31.028 [ 00:15:31.028 { 00:15:31.028 "name": "NewBaseBdev", 00:15:31.028 "aliases": [ 00:15:31.028 "de254533-adb2-4139-ad56-17124cccce60" 00:15:31.028 ], 00:15:31.028 "product_name": "Malloc disk", 00:15:31.028 "block_size": 512, 00:15:31.028 "num_blocks": 65536, 00:15:31.028 "uuid": "de254533-adb2-4139-ad56-17124cccce60", 00:15:31.028 "assigned_rate_limits": { 00:15:31.028 "rw_ios_per_sec": 0, 00:15:31.028 "rw_mbytes_per_sec": 0, 00:15:31.028 "r_mbytes_per_sec": 0, 00:15:31.028 "w_mbytes_per_sec": 0 00:15:31.028 }, 00:15:31.028 "claimed": true, 00:15:31.028 "claim_type": "exclusive_write", 00:15:31.028 "zoned": false, 00:15:31.028 "supported_io_types": { 00:15:31.028 "read": true, 00:15:31.028 "write": true, 00:15:31.028 "unmap": true, 00:15:31.028 "write_zeroes": true, 00:15:31.028 "flush": true, 00:15:31.028 "reset": true, 00:15:31.028 "compare": false, 00:15:31.028 "compare_and_write": false, 00:15:31.028 "abort": true, 00:15:31.028 "nvme_admin": false, 00:15:31.028 "nvme_io": false 00:15:31.028 }, 00:15:31.028 "memory_domains": [ 00:15:31.028 { 00:15:31.028 "dma_device_id": "system", 00:15:31.028 "dma_device_type": 1 00:15:31.028 }, 00:15:31.028 { 00:15:31.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.028 "dma_device_type": 2 00:15:31.028 } 00:15:31.028 ], 00:15:31.028 "driver_specific": {} 00:15:31.028 } 00:15:31.028 ] 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.028 23:30:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.028 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.287 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.287 "name": "Existed_Raid", 00:15:31.287 "uuid": "85a07257-5511-493e-980a-7ae002ed6f5f", 00:15:31.287 "strip_size_kb": 64, 00:15:31.287 "state": "online", 00:15:31.287 "raid_level": "concat", 00:15:31.287 "superblock": false, 00:15:31.287 "num_base_bdevs": 3, 00:15:31.287 "num_base_bdevs_discovered": 3, 00:15:31.287 "num_base_bdevs_operational": 3, 00:15:31.287 "base_bdevs_list": [ 00:15:31.287 { 00:15:31.287 "name": "NewBaseBdev", 00:15:31.287 "uuid": "de254533-adb2-4139-ad56-17124cccce60", 00:15:31.287 "is_configured": true, 00:15:31.287 "data_offset": 0, 00:15:31.287 "data_size": 65536 00:15:31.287 }, 00:15:31.287 { 00:15:31.287 "name": "BaseBdev2", 00:15:31.287 "uuid": "2283f808-7345-4367-be16-1e895f11fded", 00:15:31.287 "is_configured": true, 00:15:31.287 "data_offset": 0, 00:15:31.287 "data_size": 65536 00:15:31.287 }, 00:15:31.287 { 00:15:31.287 "name": "BaseBdev3", 00:15:31.287 "uuid": "ef83b458-240f-4885-8ea6-f2142431c238", 00:15:31.287 "is_configured": true, 00:15:31.287 "data_offset": 0, 00:15:31.287 "data_size": 65536 00:15:31.287 } 00:15:31.287 ] 00:15:31.287 }' 00:15:31.287 23:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.287 23:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.854 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.854 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:31.854 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:31.854 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:31.854 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:31.854 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:15:31.854 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:31.854 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:32.113 [2024-05-14 23:30:55.198732] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.113 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:32.113 "name": "Existed_Raid", 00:15:32.113 "aliases": [ 00:15:32.113 "85a07257-5511-493e-980a-7ae002ed6f5f" 00:15:32.113 ], 00:15:32.113 "product_name": "Raid Volume", 00:15:32.113 "block_size": 512, 00:15:32.113 "num_blocks": 196608, 00:15:32.113 "uuid": "85a07257-5511-493e-980a-7ae002ed6f5f", 00:15:32.113 "assigned_rate_limits": { 00:15:32.113 "rw_ios_per_sec": 0, 00:15:32.113 "rw_mbytes_per_sec": 0, 00:15:32.113 "r_mbytes_per_sec": 0, 00:15:32.113 "w_mbytes_per_sec": 0 00:15:32.113 }, 00:15:32.113 "claimed": false, 00:15:32.113 "zoned": false, 00:15:32.113 "supported_io_types": { 00:15:32.113 "read": true, 00:15:32.113 "write": true, 00:15:32.113 
"unmap": true, 00:15:32.113 "write_zeroes": true, 00:15:32.113 "flush": true, 00:15:32.113 "reset": true, 00:15:32.113 "compare": false, 00:15:32.113 "compare_and_write": false, 00:15:32.113 "abort": false, 00:15:32.113 "nvme_admin": false, 00:15:32.113 "nvme_io": false 00:15:32.113 }, 00:15:32.113 "memory_domains": [ 00:15:32.113 { 00:15:32.113 "dma_device_id": "system", 00:15:32.113 "dma_device_type": 1 00:15:32.113 }, 00:15:32.113 { 00:15:32.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.113 "dma_device_type": 2 00:15:32.113 }, 00:15:32.113 { 00:15:32.113 "dma_device_id": "system", 00:15:32.113 "dma_device_type": 1 00:15:32.113 }, 00:15:32.113 { 00:15:32.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.113 "dma_device_type": 2 00:15:32.113 }, 00:15:32.113 { 00:15:32.113 "dma_device_id": "system", 00:15:32.113 "dma_device_type": 1 00:15:32.113 }, 00:15:32.113 { 00:15:32.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.113 "dma_device_type": 2 00:15:32.113 } 00:15:32.113 ], 00:15:32.113 "driver_specific": { 00:15:32.113 "raid": { 00:15:32.113 "uuid": "85a07257-5511-493e-980a-7ae002ed6f5f", 00:15:32.113 "strip_size_kb": 64, 00:15:32.113 "state": "online", 00:15:32.113 "raid_level": "concat", 00:15:32.113 "superblock": false, 00:15:32.113 "num_base_bdevs": 3, 00:15:32.113 "num_base_bdevs_discovered": 3, 00:15:32.113 "num_base_bdevs_operational": 3, 00:15:32.113 "base_bdevs_list": [ 00:15:32.113 { 00:15:32.113 "name": "NewBaseBdev", 00:15:32.113 "uuid": "de254533-adb2-4139-ad56-17124cccce60", 00:15:32.113 "is_configured": true, 00:15:32.113 "data_offset": 0, 00:15:32.113 "data_size": 65536 00:15:32.113 }, 00:15:32.113 { 00:15:32.113 "name": "BaseBdev2", 00:15:32.113 "uuid": "2283f808-7345-4367-be16-1e895f11fded", 00:15:32.113 "is_configured": true, 00:15:32.113 "data_offset": 0, 00:15:32.113 "data_size": 65536 00:15:32.113 }, 00:15:32.113 { 00:15:32.113 "name": "BaseBdev3", 00:15:32.113 "uuid": "ef83b458-240f-4885-8ea6-f2142431c238", 00:15:32.113 "is_configured": true, 00:15:32.113 "data_offset": 0, 00:15:32.113 "data_size": 65536 00:15:32.113 } 00:15:32.113 ] 00:15:32.113 } 00:15:32.113 } 00:15:32.113 }' 00:15:32.113 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.113 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:15:32.113 BaseBdev2 00:15:32.113 BaseBdev3' 00:15:32.113 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:32.113 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:32.113 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:32.372 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:32.372 "name": "NewBaseBdev", 00:15:32.372 "aliases": [ 00:15:32.372 "de254533-adb2-4139-ad56-17124cccce60" 00:15:32.372 ], 00:15:32.372 "product_name": "Malloc disk", 00:15:32.372 "block_size": 512, 00:15:32.372 "num_blocks": 65536, 00:15:32.372 "uuid": "de254533-adb2-4139-ad56-17124cccce60", 00:15:32.372 "assigned_rate_limits": { 00:15:32.372 "rw_ios_per_sec": 0, 00:15:32.372 "rw_mbytes_per_sec": 0, 00:15:32.372 "r_mbytes_per_sec": 0, 00:15:32.372 "w_mbytes_per_sec": 0 00:15:32.372 }, 00:15:32.372 "claimed": true, 00:15:32.372 
"claim_type": "exclusive_write", 00:15:32.372 "zoned": false, 00:15:32.372 "supported_io_types": { 00:15:32.372 "read": true, 00:15:32.372 "write": true, 00:15:32.372 "unmap": true, 00:15:32.372 "write_zeroes": true, 00:15:32.372 "flush": true, 00:15:32.372 "reset": true, 00:15:32.372 "compare": false, 00:15:32.372 "compare_and_write": false, 00:15:32.372 "abort": true, 00:15:32.372 "nvme_admin": false, 00:15:32.372 "nvme_io": false 00:15:32.372 }, 00:15:32.372 "memory_domains": [ 00:15:32.372 { 00:15:32.372 "dma_device_id": "system", 00:15:32.372 "dma_device_type": 1 00:15:32.372 }, 00:15:32.372 { 00:15:32.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.372 "dma_device_type": 2 00:15:32.372 } 00:15:32.372 ], 00:15:32.372 "driver_specific": {} 00:15:32.372 }' 00:15:32.372 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:32.372 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:32.372 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:32.372 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:32.372 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:32.631 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.631 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:32.631 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:32.631 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.631 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:32.631 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:32.631 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:32.631 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:32.631 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:32.631 23:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:32.889 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:32.889 "name": "BaseBdev2", 00:15:32.889 "aliases": [ 00:15:32.889 "2283f808-7345-4367-be16-1e895f11fded" 00:15:32.889 ], 00:15:32.889 "product_name": "Malloc disk", 00:15:32.889 "block_size": 512, 00:15:32.889 "num_blocks": 65536, 00:15:32.889 "uuid": "2283f808-7345-4367-be16-1e895f11fded", 00:15:32.889 "assigned_rate_limits": { 00:15:32.889 "rw_ios_per_sec": 0, 00:15:32.889 "rw_mbytes_per_sec": 0, 00:15:32.889 "r_mbytes_per_sec": 0, 00:15:32.889 "w_mbytes_per_sec": 0 00:15:32.889 }, 00:15:32.889 "claimed": true, 00:15:32.889 "claim_type": "exclusive_write", 00:15:32.890 "zoned": false, 00:15:32.890 "supported_io_types": { 00:15:32.890 "read": true, 00:15:32.890 "write": true, 00:15:32.890 "unmap": true, 00:15:32.890 "write_zeroes": true, 00:15:32.890 "flush": true, 00:15:32.890 "reset": true, 00:15:32.890 "compare": false, 00:15:32.890 "compare_and_write": false, 00:15:32.890 "abort": true, 00:15:32.890 "nvme_admin": false, 00:15:32.890 "nvme_io": false 00:15:32.890 }, 00:15:32.890 "memory_domains": [ 00:15:32.890 { 
00:15:32.890 "dma_device_id": "system", 00:15:32.890 "dma_device_type": 1 00:15:32.890 }, 00:15:32.890 { 00:15:32.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.890 "dma_device_type": 2 00:15:32.890 } 00:15:32.890 ], 00:15:32.890 "driver_specific": {} 00:15:32.890 }' 00:15:32.890 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:32.890 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:33.148 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:33.148 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:33.148 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:33.148 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:33.148 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:33.148 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:33.148 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.148 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:33.407 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:33.407 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:33.407 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:33.407 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:33.407 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:33.665 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:33.665 "name": "BaseBdev3", 00:15:33.665 "aliases": [ 00:15:33.665 "ef83b458-240f-4885-8ea6-f2142431c238" 00:15:33.665 ], 00:15:33.665 "product_name": "Malloc disk", 00:15:33.665 "block_size": 512, 00:15:33.665 "num_blocks": 65536, 00:15:33.665 "uuid": "ef83b458-240f-4885-8ea6-f2142431c238", 00:15:33.665 "assigned_rate_limits": { 00:15:33.665 "rw_ios_per_sec": 0, 00:15:33.665 "rw_mbytes_per_sec": 0, 00:15:33.665 "r_mbytes_per_sec": 0, 00:15:33.665 "w_mbytes_per_sec": 0 00:15:33.665 }, 00:15:33.665 "claimed": true, 00:15:33.665 "claim_type": "exclusive_write", 00:15:33.665 "zoned": false, 00:15:33.665 "supported_io_types": { 00:15:33.665 "read": true, 00:15:33.665 "write": true, 00:15:33.665 "unmap": true, 00:15:33.665 "write_zeroes": true, 00:15:33.665 "flush": true, 00:15:33.665 "reset": true, 00:15:33.665 "compare": false, 00:15:33.665 "compare_and_write": false, 00:15:33.665 "abort": true, 00:15:33.665 "nvme_admin": false, 00:15:33.665 "nvme_io": false 00:15:33.665 }, 00:15:33.665 "memory_domains": [ 00:15:33.665 { 00:15:33.665 "dma_device_id": "system", 00:15:33.665 "dma_device_type": 1 00:15:33.665 }, 00:15:33.665 { 00:15:33.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.665 "dma_device_type": 2 00:15:33.665 } 00:15:33.665 ], 00:15:33.665 "driver_specific": {} 00:15:33.665 }' 00:15:33.665 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:33.665 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:33.665 23:30:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:33.665 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:33.665 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:33.924 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:33.924 23:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:33.924 23:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:33.924 23:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.924 23:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:33.924 23:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:34.182 [2024-05-14 23:30:57.394915] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.182 [2024-05-14 23:30:57.394954] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.182 [2024-05-14 23:30:57.395075] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.182 [2024-05-14 23:30:57.395114] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.182 [2024-05-14 23:30:57.395124] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 58905 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 58905 ']' 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 58905 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58905 00:15:34.182 killing process with pid 58905 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58905' 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 58905 00:15:34.182 23:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 58905 00:15:34.182 [2024-05-14 23:30:57.430051] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.440 [2024-05-14 23:30:57.645619] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.814 ************************************ 00:15:35.815 END TEST raid_state_function_test 00:15:35.815 ************************************ 00:15:35.815 23:30:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:15:35.815 00:15:35.815 real 0m30.111s 00:15:35.815 user 0m56.739s 00:15:35.815 sys 0m3.048s 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.815 23:30:58 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:15:35.815 23:30:58 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:35.815 23:30:58 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:35.815 23:30:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.815 ************************************ 00:15:35.815 START TEST raid_state_function_test_sb 00:15:35.815 ************************************ 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 true 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 
00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:15:35.815 Process raid pid: 59900 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=59900 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 59900' 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 59900 /var/tmp/spdk-raid.sock 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 59900 ']' 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:35.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:35.815 23:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.815 [2024-05-14 23:30:58.984015] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:15:35.815 [2024-05-14 23:30:58.984640] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.073 [2024-05-14 23:30:59.147732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.332 [2024-05-14 23:30:59.390516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.332 [2024-05-14 23:30:59.593218] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.590 23:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:36.590 23:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:15:36.590 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:36.849 [2024-05-14 23:30:59.970902] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.849 [2024-05-14 23:30:59.970992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.849 [2024-05-14 23:30:59.971014] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.849 [2024-05-14 23:30:59.971070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.849 [2024-05-14 23:30:59.971082] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:36.849 [2024-05-14 23:30:59.971139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.849 23:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.106 23:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.106 "name": "Existed_Raid", 00:15:37.106 "uuid": 
"4a19666a-52ea-4d93-b683-1876c4a682c6", 00:15:37.106 "strip_size_kb": 64, 00:15:37.106 "state": "configuring", 00:15:37.106 "raid_level": "concat", 00:15:37.106 "superblock": true, 00:15:37.106 "num_base_bdevs": 3, 00:15:37.106 "num_base_bdevs_discovered": 0, 00:15:37.106 "num_base_bdevs_operational": 3, 00:15:37.106 "base_bdevs_list": [ 00:15:37.106 { 00:15:37.106 "name": "BaseBdev1", 00:15:37.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.106 "is_configured": false, 00:15:37.106 "data_offset": 0, 00:15:37.106 "data_size": 0 00:15:37.106 }, 00:15:37.106 { 00:15:37.106 "name": "BaseBdev2", 00:15:37.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.106 "is_configured": false, 00:15:37.106 "data_offset": 0, 00:15:37.106 "data_size": 0 00:15:37.106 }, 00:15:37.106 { 00:15:37.106 "name": "BaseBdev3", 00:15:37.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.106 "is_configured": false, 00:15:37.106 "data_offset": 0, 00:15:37.106 "data_size": 0 00:15:37.106 } 00:15:37.106 ] 00:15:37.106 }' 00:15:37.106 23:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.106 23:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.671 23:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:37.929 [2024-05-14 23:31:01.050879] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.929 [2024-05-14 23:31:01.050922] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:15:37.929 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:38.188 [2024-05-14 23:31:01.238953] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.188 [2024-05-14 23:31:01.239081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.188 [2024-05-14 23:31:01.239111] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.188 [2024-05-14 23:31:01.239137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.188 [2024-05-14 23:31:01.239146] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:38.188 [2024-05-14 23:31:01.239441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.188 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.188 [2024-05-14 23:31:01.461244] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.188 BaseBdev1 00:15:38.447 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:15:38.447 23:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:38.447 23:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:38.447 23:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 
00:15:38.447 23:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:38.447 23:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:38.448 23:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.448 23:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.707 [ 00:15:38.707 { 00:15:38.707 "name": "BaseBdev1", 00:15:38.707 "aliases": [ 00:15:38.707 "b92350b4-1a91-4f68-b435-d0fbaaa3f16a" 00:15:38.707 ], 00:15:38.707 "product_name": "Malloc disk", 00:15:38.707 "block_size": 512, 00:15:38.707 "num_blocks": 65536, 00:15:38.707 "uuid": "b92350b4-1a91-4f68-b435-d0fbaaa3f16a", 00:15:38.707 "assigned_rate_limits": { 00:15:38.707 "rw_ios_per_sec": 0, 00:15:38.707 "rw_mbytes_per_sec": 0, 00:15:38.707 "r_mbytes_per_sec": 0, 00:15:38.707 "w_mbytes_per_sec": 0 00:15:38.707 }, 00:15:38.707 "claimed": true, 00:15:38.707 "claim_type": "exclusive_write", 00:15:38.707 "zoned": false, 00:15:38.707 "supported_io_types": { 00:15:38.707 "read": true, 00:15:38.707 "write": true, 00:15:38.707 "unmap": true, 00:15:38.707 "write_zeroes": true, 00:15:38.707 "flush": true, 00:15:38.707 "reset": true, 00:15:38.707 "compare": false, 00:15:38.707 "compare_and_write": false, 00:15:38.707 "abort": true, 00:15:38.707 "nvme_admin": false, 00:15:38.707 "nvme_io": false 00:15:38.707 }, 00:15:38.707 "memory_domains": [ 00:15:38.707 { 00:15:38.707 "dma_device_id": "system", 00:15:38.707 "dma_device_type": 1 00:15:38.707 }, 00:15:38.707 { 00:15:38.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.707 "dma_device_type": 2 00:15:38.707 } 00:15:38.707 ], 00:15:38.707 "driver_specific": {} 00:15:38.707 } 00:15:38.707 ] 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.707 23:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.040 23:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.040 "name": "Existed_Raid", 00:15:39.040 "uuid": "2d650587-108a-4999-889c-8e1cf83b3add", 00:15:39.040 "strip_size_kb": 64, 00:15:39.040 "state": "configuring", 00:15:39.040 "raid_level": "concat", 00:15:39.040 "superblock": true, 00:15:39.040 "num_base_bdevs": 3, 00:15:39.040 "num_base_bdevs_discovered": 1, 00:15:39.040 "num_base_bdevs_operational": 3, 00:15:39.040 "base_bdevs_list": [ 00:15:39.040 { 00:15:39.040 "name": "BaseBdev1", 00:15:39.040 "uuid": "b92350b4-1a91-4f68-b435-d0fbaaa3f16a", 00:15:39.040 "is_configured": true, 00:15:39.040 "data_offset": 2048, 00:15:39.040 "data_size": 63488 00:15:39.040 }, 00:15:39.040 { 00:15:39.040 "name": "BaseBdev2", 00:15:39.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.040 "is_configured": false, 00:15:39.040 "data_offset": 0, 00:15:39.040 "data_size": 0 00:15:39.040 }, 00:15:39.040 { 00:15:39.040 "name": "BaseBdev3", 00:15:39.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.040 "is_configured": false, 00:15:39.040 "data_offset": 0, 00:15:39.040 "data_size": 0 00:15:39.040 } 00:15:39.040 ] 00:15:39.040 }' 00:15:39.040 23:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.040 23:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.607 23:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:39.607 [2024-05-14 23:31:02.849467] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.607 [2024-05-14 23:31:02.849515] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:15:39.607 23:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:39.866 [2024-05-14 23:31:03.045599] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.866 [2024-05-14 23:31:03.047283] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.866 [2024-05-14 23:31:03.047338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.866 [2024-05-14 23:31:03.047367] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:39.866 [2024-05-14 23:31:03.047394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
raid_level=concat 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.866 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.125 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.125 "name": "Existed_Raid", 00:15:40.125 "uuid": "333858d7-3619-421a-ad86-da4f32ea4df5", 00:15:40.125 "strip_size_kb": 64, 00:15:40.125 "state": "configuring", 00:15:40.125 "raid_level": "concat", 00:15:40.125 "superblock": true, 00:15:40.125 "num_base_bdevs": 3, 00:15:40.125 "num_base_bdevs_discovered": 1, 00:15:40.125 "num_base_bdevs_operational": 3, 00:15:40.125 "base_bdevs_list": [ 00:15:40.125 { 00:15:40.125 "name": "BaseBdev1", 00:15:40.125 "uuid": "b92350b4-1a91-4f68-b435-d0fbaaa3f16a", 00:15:40.125 "is_configured": true, 00:15:40.125 "data_offset": 2048, 00:15:40.125 "data_size": 63488 00:15:40.125 }, 00:15:40.125 { 00:15:40.125 "name": "BaseBdev2", 00:15:40.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.125 "is_configured": false, 00:15:40.125 "data_offset": 0, 00:15:40.125 "data_size": 0 00:15:40.125 }, 00:15:40.125 { 00:15:40.125 "name": "BaseBdev3", 00:15:40.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.125 "is_configured": false, 00:15:40.125 "data_offset": 0, 00:15:40.125 "data_size": 0 00:15:40.125 } 00:15:40.125 ] 00:15:40.125 }' 00:15:40.125 23:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.125 23:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.061 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.061 [2024-05-14 23:31:04.267596] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.061 BaseBdev2 00:15:41.061 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:15:41.061 23:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:41.061 23:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:41.061 23:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:41.061 23:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:41.061 23:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:41.061 23:31:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.319 23:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:41.578 [ 00:15:41.578 { 00:15:41.578 "name": "BaseBdev2", 00:15:41.578 "aliases": [ 00:15:41.578 "ed150ceb-6bdb-4f43-ba12-61c8f7a23c3e" 00:15:41.578 ], 00:15:41.578 "product_name": "Malloc disk", 00:15:41.578 "block_size": 512, 00:15:41.578 "num_blocks": 65536, 00:15:41.578 "uuid": "ed150ceb-6bdb-4f43-ba12-61c8f7a23c3e", 00:15:41.578 "assigned_rate_limits": { 00:15:41.578 "rw_ios_per_sec": 0, 00:15:41.578 "rw_mbytes_per_sec": 0, 00:15:41.578 "r_mbytes_per_sec": 0, 00:15:41.578 "w_mbytes_per_sec": 0 00:15:41.578 }, 00:15:41.578 "claimed": true, 00:15:41.578 "claim_type": "exclusive_write", 00:15:41.578 "zoned": false, 00:15:41.578 "supported_io_types": { 00:15:41.578 "read": true, 00:15:41.578 "write": true, 00:15:41.578 "unmap": true, 00:15:41.578 "write_zeroes": true, 00:15:41.578 "flush": true, 00:15:41.578 "reset": true, 00:15:41.578 "compare": false, 00:15:41.578 "compare_and_write": false, 00:15:41.578 "abort": true, 00:15:41.578 "nvme_admin": false, 00:15:41.578 "nvme_io": false 00:15:41.578 }, 00:15:41.578 "memory_domains": [ 00:15:41.578 { 00:15:41.578 "dma_device_id": "system", 00:15:41.578 "dma_device_type": 1 00:15:41.578 }, 00:15:41.578 { 00:15:41.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.578 "dma_device_type": 2 00:15:41.578 } 00:15:41.578 ], 00:15:41.578 "driver_specific": {} 00:15:41.578 } 00:15:41.578 ] 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.578 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.836 23:31:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.836 "name": "Existed_Raid", 00:15:41.836 "uuid": "333858d7-3619-421a-ad86-da4f32ea4df5", 00:15:41.836 "strip_size_kb": 64, 00:15:41.836 "state": "configuring", 00:15:41.836 "raid_level": "concat", 00:15:41.836 "superblock": true, 00:15:41.836 "num_base_bdevs": 3, 00:15:41.836 "num_base_bdevs_discovered": 2, 00:15:41.836 "num_base_bdevs_operational": 3, 00:15:41.836 "base_bdevs_list": [ 00:15:41.836 { 00:15:41.836 "name": "BaseBdev1", 00:15:41.836 "uuid": "b92350b4-1a91-4f68-b435-d0fbaaa3f16a", 00:15:41.836 "is_configured": true, 00:15:41.836 "data_offset": 2048, 00:15:41.836 "data_size": 63488 00:15:41.836 }, 00:15:41.836 { 00:15:41.836 "name": "BaseBdev2", 00:15:41.836 "uuid": "ed150ceb-6bdb-4f43-ba12-61c8f7a23c3e", 00:15:41.836 "is_configured": true, 00:15:41.836 "data_offset": 2048, 00:15:41.836 "data_size": 63488 00:15:41.836 }, 00:15:41.836 { 00:15:41.836 "name": "BaseBdev3", 00:15:41.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.836 "is_configured": false, 00:15:41.836 "data_offset": 0, 00:15:41.836 "data_size": 0 00:15:41.836 } 00:15:41.836 ] 00:15:41.836 }' 00:15:41.836 23:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.836 23:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.414 23:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:42.685 [2024-05-14 23:31:05.888380] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.685 [2024-05-14 23:31:05.888581] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:15:42.685 [2024-05-14 23:31:05.888596] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:42.685 [2024-05-14 23:31:05.888693] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:42.685 [2024-05-14 23:31:05.888939] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:15:42.685 [2024-05-14 23:31:05.888954] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:15:42.685 BaseBdev3 00:15:42.685 [2024-05-14 23:31:05.889298] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.685 23:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:15:42.685 23:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:42.685 23:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:42.685 23:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:42.685 23:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:42.685 23:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:42.685 23:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:42.943 23:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:43.201 [ 00:15:43.201 { 00:15:43.201 "name": "BaseBdev3", 00:15:43.201 "aliases": [ 00:15:43.201 "8da61d6e-7d10-4a5a-97a4-2085f28d19f6" 00:15:43.201 ], 00:15:43.201 "product_name": "Malloc disk", 00:15:43.201 "block_size": 512, 00:15:43.201 "num_blocks": 65536, 00:15:43.201 "uuid": "8da61d6e-7d10-4a5a-97a4-2085f28d19f6", 00:15:43.201 "assigned_rate_limits": { 00:15:43.201 "rw_ios_per_sec": 0, 00:15:43.202 "rw_mbytes_per_sec": 0, 00:15:43.202 "r_mbytes_per_sec": 0, 00:15:43.202 "w_mbytes_per_sec": 0 00:15:43.202 }, 00:15:43.202 "claimed": true, 00:15:43.202 "claim_type": "exclusive_write", 00:15:43.202 "zoned": false, 00:15:43.202 "supported_io_types": { 00:15:43.202 "read": true, 00:15:43.202 "write": true, 00:15:43.202 "unmap": true, 00:15:43.202 "write_zeroes": true, 00:15:43.202 "flush": true, 00:15:43.202 "reset": true, 00:15:43.202 "compare": false, 00:15:43.202 "compare_and_write": false, 00:15:43.202 "abort": true, 00:15:43.202 "nvme_admin": false, 00:15:43.202 "nvme_io": false 00:15:43.202 }, 00:15:43.202 "memory_domains": [ 00:15:43.202 { 00:15:43.202 "dma_device_id": "system", 00:15:43.202 "dma_device_type": 1 00:15:43.202 }, 00:15:43.202 { 00:15:43.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.202 "dma_device_type": 2 00:15:43.202 } 00:15:43.202 ], 00:15:43.202 "driver_specific": {} 00:15:43.202 } 00:15:43.202 ] 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.202 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.460 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.460 "name": "Existed_Raid", 00:15:43.460 "uuid": "333858d7-3619-421a-ad86-da4f32ea4df5", 00:15:43.460 "strip_size_kb": 64, 00:15:43.460 "state": "online", 00:15:43.460 "raid_level": "concat", 
00:15:43.460 "superblock": true, 00:15:43.460 "num_base_bdevs": 3, 00:15:43.460 "num_base_bdevs_discovered": 3, 00:15:43.460 "num_base_bdevs_operational": 3, 00:15:43.460 "base_bdevs_list": [ 00:15:43.460 { 00:15:43.460 "name": "BaseBdev1", 00:15:43.460 "uuid": "b92350b4-1a91-4f68-b435-d0fbaaa3f16a", 00:15:43.460 "is_configured": true, 00:15:43.460 "data_offset": 2048, 00:15:43.460 "data_size": 63488 00:15:43.460 }, 00:15:43.460 { 00:15:43.460 "name": "BaseBdev2", 00:15:43.460 "uuid": "ed150ceb-6bdb-4f43-ba12-61c8f7a23c3e", 00:15:43.460 "is_configured": true, 00:15:43.460 "data_offset": 2048, 00:15:43.460 "data_size": 63488 00:15:43.460 }, 00:15:43.460 { 00:15:43.460 "name": "BaseBdev3", 00:15:43.460 "uuid": "8da61d6e-7d10-4a5a-97a4-2085f28d19f6", 00:15:43.460 "is_configured": true, 00:15:43.460 "data_offset": 2048, 00:15:43.460 "data_size": 63488 00:15:43.460 } 00:15:43.460 ] 00:15:43.460 }' 00:15:43.460 23:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.460 23:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.028 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:15:44.028 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:15:44.028 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:15:44.028 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:15:44.028 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:15:44.028 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:15:44.028 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:44.028 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:15:44.287 [2024-05-14 23:31:07.468948] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.287 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:15:44.287 "name": "Existed_Raid", 00:15:44.287 "aliases": [ 00:15:44.287 "333858d7-3619-421a-ad86-da4f32ea4df5" 00:15:44.287 ], 00:15:44.287 "product_name": "Raid Volume", 00:15:44.287 "block_size": 512, 00:15:44.287 "num_blocks": 190464, 00:15:44.287 "uuid": "333858d7-3619-421a-ad86-da4f32ea4df5", 00:15:44.287 "assigned_rate_limits": { 00:15:44.287 "rw_ios_per_sec": 0, 00:15:44.287 "rw_mbytes_per_sec": 0, 00:15:44.287 "r_mbytes_per_sec": 0, 00:15:44.287 "w_mbytes_per_sec": 0 00:15:44.287 }, 00:15:44.287 "claimed": false, 00:15:44.287 "zoned": false, 00:15:44.287 "supported_io_types": { 00:15:44.287 "read": true, 00:15:44.287 "write": true, 00:15:44.287 "unmap": true, 00:15:44.287 "write_zeroes": true, 00:15:44.287 "flush": true, 00:15:44.287 "reset": true, 00:15:44.287 "compare": false, 00:15:44.287 "compare_and_write": false, 00:15:44.287 "abort": false, 00:15:44.287 "nvme_admin": false, 00:15:44.287 "nvme_io": false 00:15:44.287 }, 00:15:44.287 "memory_domains": [ 00:15:44.287 { 00:15:44.287 "dma_device_id": "system", 00:15:44.287 "dma_device_type": 1 00:15:44.287 }, 00:15:44.287 { 00:15:44.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.287 "dma_device_type": 2 00:15:44.287 }, 00:15:44.287 { 
00:15:44.287 "dma_device_id": "system", 00:15:44.287 "dma_device_type": 1 00:15:44.287 }, 00:15:44.287 { 00:15:44.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.287 "dma_device_type": 2 00:15:44.287 }, 00:15:44.287 { 00:15:44.287 "dma_device_id": "system", 00:15:44.287 "dma_device_type": 1 00:15:44.287 }, 00:15:44.287 { 00:15:44.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.287 "dma_device_type": 2 00:15:44.287 } 00:15:44.287 ], 00:15:44.287 "driver_specific": { 00:15:44.287 "raid": { 00:15:44.287 "uuid": "333858d7-3619-421a-ad86-da4f32ea4df5", 00:15:44.287 "strip_size_kb": 64, 00:15:44.287 "state": "online", 00:15:44.287 "raid_level": "concat", 00:15:44.287 "superblock": true, 00:15:44.287 "num_base_bdevs": 3, 00:15:44.287 "num_base_bdevs_discovered": 3, 00:15:44.287 "num_base_bdevs_operational": 3, 00:15:44.287 "base_bdevs_list": [ 00:15:44.287 { 00:15:44.287 "name": "BaseBdev1", 00:15:44.287 "uuid": "b92350b4-1a91-4f68-b435-d0fbaaa3f16a", 00:15:44.287 "is_configured": true, 00:15:44.287 "data_offset": 2048, 00:15:44.287 "data_size": 63488 00:15:44.287 }, 00:15:44.287 { 00:15:44.287 "name": "BaseBdev2", 00:15:44.287 "uuid": "ed150ceb-6bdb-4f43-ba12-61c8f7a23c3e", 00:15:44.287 "is_configured": true, 00:15:44.287 "data_offset": 2048, 00:15:44.287 "data_size": 63488 00:15:44.287 }, 00:15:44.287 { 00:15:44.287 "name": "BaseBdev3", 00:15:44.287 "uuid": "8da61d6e-7d10-4a5a-97a4-2085f28d19f6", 00:15:44.287 "is_configured": true, 00:15:44.287 "data_offset": 2048, 00:15:44.287 "data_size": 63488 00:15:44.287 } 00:15:44.287 ] 00:15:44.287 } 00:15:44.287 } 00:15:44.287 }' 00:15:44.287 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:44.287 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:15:44.287 BaseBdev2 00:15:44.287 BaseBdev3' 00:15:44.287 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:44.287 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:44.287 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:44.546 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:44.546 "name": "BaseBdev1", 00:15:44.546 "aliases": [ 00:15:44.546 "b92350b4-1a91-4f68-b435-d0fbaaa3f16a" 00:15:44.546 ], 00:15:44.546 "product_name": "Malloc disk", 00:15:44.546 "block_size": 512, 00:15:44.546 "num_blocks": 65536, 00:15:44.546 "uuid": "b92350b4-1a91-4f68-b435-d0fbaaa3f16a", 00:15:44.546 "assigned_rate_limits": { 00:15:44.546 "rw_ios_per_sec": 0, 00:15:44.546 "rw_mbytes_per_sec": 0, 00:15:44.546 "r_mbytes_per_sec": 0, 00:15:44.546 "w_mbytes_per_sec": 0 00:15:44.546 }, 00:15:44.546 "claimed": true, 00:15:44.546 "claim_type": "exclusive_write", 00:15:44.546 "zoned": false, 00:15:44.546 "supported_io_types": { 00:15:44.546 "read": true, 00:15:44.546 "write": true, 00:15:44.546 "unmap": true, 00:15:44.546 "write_zeroes": true, 00:15:44.546 "flush": true, 00:15:44.546 "reset": true, 00:15:44.546 "compare": false, 00:15:44.546 "compare_and_write": false, 00:15:44.546 "abort": true, 00:15:44.546 "nvme_admin": false, 00:15:44.546 "nvme_io": false 00:15:44.546 }, 00:15:44.546 "memory_domains": [ 00:15:44.546 { 00:15:44.546 "dma_device_id": "system", 00:15:44.546 
"dma_device_type": 1 00:15:44.546 }, 00:15:44.546 { 00:15:44.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.546 "dma_device_type": 2 00:15:44.546 } 00:15:44.546 ], 00:15:44.546 "driver_specific": {} 00:15:44.546 }' 00:15:44.546 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:44.804 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:44.804 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:44.804 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:44.804 23:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:44.804 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:44.804 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:45.062 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:45.062 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:45.062 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:45.062 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:45.062 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:45.062 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:45.062 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:45.062 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:45.326 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:45.327 "name": "BaseBdev2", 00:15:45.327 "aliases": [ 00:15:45.327 "ed150ceb-6bdb-4f43-ba12-61c8f7a23c3e" 00:15:45.327 ], 00:15:45.327 "product_name": "Malloc disk", 00:15:45.327 "block_size": 512, 00:15:45.327 "num_blocks": 65536, 00:15:45.327 "uuid": "ed150ceb-6bdb-4f43-ba12-61c8f7a23c3e", 00:15:45.327 "assigned_rate_limits": { 00:15:45.327 "rw_ios_per_sec": 0, 00:15:45.327 "rw_mbytes_per_sec": 0, 00:15:45.327 "r_mbytes_per_sec": 0, 00:15:45.327 "w_mbytes_per_sec": 0 00:15:45.327 }, 00:15:45.327 "claimed": true, 00:15:45.327 "claim_type": "exclusive_write", 00:15:45.327 "zoned": false, 00:15:45.327 "supported_io_types": { 00:15:45.327 "read": true, 00:15:45.327 "write": true, 00:15:45.327 "unmap": true, 00:15:45.327 "write_zeroes": true, 00:15:45.327 "flush": true, 00:15:45.327 "reset": true, 00:15:45.327 "compare": false, 00:15:45.327 "compare_and_write": false, 00:15:45.327 "abort": true, 00:15:45.327 "nvme_admin": false, 00:15:45.327 "nvme_io": false 00:15:45.327 }, 00:15:45.327 "memory_domains": [ 00:15:45.327 { 00:15:45.327 "dma_device_id": "system", 00:15:45.327 "dma_device_type": 1 00:15:45.327 }, 00:15:45.327 { 00:15:45.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.327 "dma_device_type": 2 00:15:45.327 } 00:15:45.327 ], 00:15:45.327 "driver_specific": {} 00:15:45.327 }' 00:15:45.327 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:45.327 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:45.585 
23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:45.585 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:45.585 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:45.585 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:45.585 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:45.585 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:45.843 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:45.843 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:45.843 23:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:45.843 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:45.843 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:15:45.843 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:45.843 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:15:46.102 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:15:46.102 "name": "BaseBdev3", 00:15:46.102 "aliases": [ 00:15:46.102 "8da61d6e-7d10-4a5a-97a4-2085f28d19f6" 00:15:46.102 ], 00:15:46.102 "product_name": "Malloc disk", 00:15:46.102 "block_size": 512, 00:15:46.102 "num_blocks": 65536, 00:15:46.102 "uuid": "8da61d6e-7d10-4a5a-97a4-2085f28d19f6", 00:15:46.102 "assigned_rate_limits": { 00:15:46.102 "rw_ios_per_sec": 0, 00:15:46.102 "rw_mbytes_per_sec": 0, 00:15:46.102 "r_mbytes_per_sec": 0, 00:15:46.102 "w_mbytes_per_sec": 0 00:15:46.102 }, 00:15:46.102 "claimed": true, 00:15:46.102 "claim_type": "exclusive_write", 00:15:46.102 "zoned": false, 00:15:46.102 "supported_io_types": { 00:15:46.102 "read": true, 00:15:46.102 "write": true, 00:15:46.102 "unmap": true, 00:15:46.102 "write_zeroes": true, 00:15:46.102 "flush": true, 00:15:46.102 "reset": true, 00:15:46.102 "compare": false, 00:15:46.102 "compare_and_write": false, 00:15:46.102 "abort": true, 00:15:46.102 "nvme_admin": false, 00:15:46.102 "nvme_io": false 00:15:46.102 }, 00:15:46.102 "memory_domains": [ 00:15:46.102 { 00:15:46.102 "dma_device_id": "system", 00:15:46.102 "dma_device_type": 1 00:15:46.102 }, 00:15:46.102 { 00:15:46.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.102 "dma_device_type": 2 00:15:46.102 } 00:15:46.102 ], 00:15:46.102 "driver_specific": {} 00:15:46.102 }' 00:15:46.102 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:46.102 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:15:46.361 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:15:46.361 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:46.361 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:15:46.361 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.361 23:31:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:46.361 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:15:46.361 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.361 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:46.619 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:15:46.619 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:15:46.619 23:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:46.879 [2024-05-14 23:31:09.993455] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.879 [2024-05-14 23:31:09.993487] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.879 [2024-05-14 23:31:09.993529] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.879 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.138 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.138 "name": "Existed_Raid", 00:15:47.138 "uuid": "333858d7-3619-421a-ad86-da4f32ea4df5", 00:15:47.138 "strip_size_kb": 64, 00:15:47.138 "state": "offline", 00:15:47.138 "raid_level": "concat", 00:15:47.138 "superblock": true, 
00:15:47.138 "num_base_bdevs": 3, 00:15:47.138 "num_base_bdevs_discovered": 2, 00:15:47.138 "num_base_bdevs_operational": 2, 00:15:47.138 "base_bdevs_list": [ 00:15:47.138 { 00:15:47.138 "name": null, 00:15:47.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.138 "is_configured": false, 00:15:47.138 "data_offset": 2048, 00:15:47.138 "data_size": 63488 00:15:47.138 }, 00:15:47.138 { 00:15:47.138 "name": "BaseBdev2", 00:15:47.138 "uuid": "ed150ceb-6bdb-4f43-ba12-61c8f7a23c3e", 00:15:47.138 "is_configured": true, 00:15:47.138 "data_offset": 2048, 00:15:47.138 "data_size": 63488 00:15:47.138 }, 00:15:47.138 { 00:15:47.138 "name": "BaseBdev3", 00:15:47.138 "uuid": "8da61d6e-7d10-4a5a-97a4-2085f28d19f6", 00:15:47.138 "is_configured": true, 00:15:47.138 "data_offset": 2048, 00:15:47.138 "data_size": 63488 00:15:47.138 } 00:15:47.138 ] 00:15:47.138 }' 00:15:47.138 23:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.138 23:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.073 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:48.073 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.073 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.073 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:48.073 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:48.073 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:48.073 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:48.331 [2024-05-14 23:31:11.577357] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:48.589 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:48.589 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.589 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.589 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:15:48.848 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:15:48.848 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:48.848 23:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:48.848 [2024-05-14 23:31:12.131818] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:48.848 [2024-05-14 23:31:12.131907] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:15:49.106 23:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:49.106 23:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:15:49.106 23:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.106 23:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:15:49.373 23:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:15:49.373 23:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:15:49.373 23:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:15:49.373 23:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:15:49.373 23:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:49.373 23:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:49.630 BaseBdev2 00:15:49.630 23:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:15:49.630 23:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:49.630 23:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:49.630 23:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:49.630 23:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:49.630 23:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:49.630 23:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.888 23:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:50.149 [ 00:15:50.149 { 00:15:50.149 "name": "BaseBdev2", 00:15:50.149 "aliases": [ 00:15:50.149 "00ea94b8-66e8-4bff-a341-5fddba812354" 00:15:50.149 ], 00:15:50.149 "product_name": "Malloc disk", 00:15:50.149 "block_size": 512, 00:15:50.149 "num_blocks": 65536, 00:15:50.149 "uuid": "00ea94b8-66e8-4bff-a341-5fddba812354", 00:15:50.149 "assigned_rate_limits": { 00:15:50.149 "rw_ios_per_sec": 0, 00:15:50.149 "rw_mbytes_per_sec": 0, 00:15:50.149 "r_mbytes_per_sec": 0, 00:15:50.149 "w_mbytes_per_sec": 0 00:15:50.149 }, 00:15:50.149 "claimed": false, 00:15:50.149 "zoned": false, 00:15:50.149 "supported_io_types": { 00:15:50.149 "read": true, 00:15:50.149 "write": true, 00:15:50.149 "unmap": true, 00:15:50.149 "write_zeroes": true, 00:15:50.149 "flush": true, 00:15:50.149 "reset": true, 00:15:50.149 "compare": false, 00:15:50.149 "compare_and_write": false, 00:15:50.149 "abort": true, 00:15:50.149 "nvme_admin": false, 00:15:50.149 "nvme_io": false 00:15:50.149 }, 00:15:50.149 "memory_domains": [ 00:15:50.149 { 00:15:50.149 "dma_device_id": "system", 00:15:50.149 "dma_device_type": 1 00:15:50.149 }, 00:15:50.149 { 00:15:50.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.149 "dma_device_type": 2 00:15:50.149 } 00:15:50.149 ], 00:15:50.149 "driver_specific": {} 00:15:50.149 } 00:15:50.149 ] 00:15:50.149 23:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
return 0 00:15:50.149 23:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:50.149 23:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:50.149 23:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:50.442 BaseBdev3 00:15:50.443 23:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:15:50.443 23:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:15:50.443 23:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:50.443 23:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:50.443 23:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:50.443 23:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:50.443 23:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:50.701 23:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:50.959 [ 00:15:50.959 { 00:15:50.959 "name": "BaseBdev3", 00:15:50.959 "aliases": [ 00:15:50.959 "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9" 00:15:50.959 ], 00:15:50.959 "product_name": "Malloc disk", 00:15:50.959 "block_size": 512, 00:15:50.959 "num_blocks": 65536, 00:15:50.959 "uuid": "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9", 00:15:50.959 "assigned_rate_limits": { 00:15:50.959 "rw_ios_per_sec": 0, 00:15:50.959 "rw_mbytes_per_sec": 0, 00:15:50.959 "r_mbytes_per_sec": 0, 00:15:50.959 "w_mbytes_per_sec": 0 00:15:50.959 }, 00:15:50.959 "claimed": false, 00:15:50.959 "zoned": false, 00:15:50.959 "supported_io_types": { 00:15:50.959 "read": true, 00:15:50.959 "write": true, 00:15:50.959 "unmap": true, 00:15:50.959 "write_zeroes": true, 00:15:50.959 "flush": true, 00:15:50.959 "reset": true, 00:15:50.959 "compare": false, 00:15:50.959 "compare_and_write": false, 00:15:50.959 "abort": true, 00:15:50.959 "nvme_admin": false, 00:15:50.959 "nvme_io": false 00:15:50.959 }, 00:15:50.959 "memory_domains": [ 00:15:50.959 { 00:15:50.959 "dma_device_id": "system", 00:15:50.959 "dma_device_type": 1 00:15:50.959 }, 00:15:50.959 { 00:15:50.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.959 "dma_device_type": 2 00:15:50.959 } 00:15:50.959 ], 00:15:50.959 "driver_specific": {} 00:15:50.959 } 00:15:50.959 ] 00:15:50.959 23:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:50.959 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:15:50.959 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:15:50.959 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:51.216 [2024-05-14 23:31:14.358579] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:15:51.216 [2024-05-14 23:31:14.358702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:51.216 [2024-05-14 23:31:14.358735] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.216 [2024-05-14 23:31:14.360554] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.216 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.473 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.473 "name": "Existed_Raid", 00:15:51.473 "uuid": "3d0b2706-b09e-4af1-89eb-accb4cbf6bdc", 00:15:51.473 "strip_size_kb": 64, 00:15:51.473 "state": "configuring", 00:15:51.473 "raid_level": "concat", 00:15:51.473 "superblock": true, 00:15:51.473 "num_base_bdevs": 3, 00:15:51.473 "num_base_bdevs_discovered": 2, 00:15:51.473 "num_base_bdevs_operational": 3, 00:15:51.473 "base_bdevs_list": [ 00:15:51.473 { 00:15:51.473 "name": "BaseBdev1", 00:15:51.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.473 "is_configured": false, 00:15:51.473 "data_offset": 0, 00:15:51.473 "data_size": 0 00:15:51.473 }, 00:15:51.473 { 00:15:51.473 "name": "BaseBdev2", 00:15:51.473 "uuid": "00ea94b8-66e8-4bff-a341-5fddba812354", 00:15:51.473 "is_configured": true, 00:15:51.473 "data_offset": 2048, 00:15:51.473 "data_size": 63488 00:15:51.473 }, 00:15:51.473 { 00:15:51.473 "name": "BaseBdev3", 00:15:51.473 "uuid": "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9", 00:15:51.473 "is_configured": true, 00:15:51.473 "data_offset": 2048, 00:15:51.473 "data_size": 63488 00:15:51.473 } 00:15:51.473 ] 00:15:51.473 }' 00:15:51.473 23:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.473 23:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:52.406 [2024-05-14 
23:31:15.654706] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.406 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.976 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.976 "name": "Existed_Raid", 00:15:52.976 "uuid": "3d0b2706-b09e-4af1-89eb-accb4cbf6bdc", 00:15:52.976 "strip_size_kb": 64, 00:15:52.976 "state": "configuring", 00:15:52.976 "raid_level": "concat", 00:15:52.976 "superblock": true, 00:15:52.976 "num_base_bdevs": 3, 00:15:52.976 "num_base_bdevs_discovered": 1, 00:15:52.976 "num_base_bdevs_operational": 3, 00:15:52.976 "base_bdevs_list": [ 00:15:52.976 { 00:15:52.976 "name": "BaseBdev1", 00:15:52.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.976 "is_configured": false, 00:15:52.976 "data_offset": 0, 00:15:52.976 "data_size": 0 00:15:52.976 }, 00:15:52.976 { 00:15:52.976 "name": null, 00:15:52.976 "uuid": "00ea94b8-66e8-4bff-a341-5fddba812354", 00:15:52.976 "is_configured": false, 00:15:52.976 "data_offset": 2048, 00:15:52.976 "data_size": 63488 00:15:52.976 }, 00:15:52.976 { 00:15:52.976 "name": "BaseBdev3", 00:15:52.976 "uuid": "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9", 00:15:52.976 "is_configured": true, 00:15:52.976 "data_offset": 2048, 00:15:52.976 "data_size": 63488 00:15:52.976 } 00:15:52.976 ] 00:15:52.976 }' 00:15:52.976 23:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.976 23:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.544 23:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.544 23:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:53.803 23:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:15:53.803 23:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:54.061 BaseBdev1 00:15:54.061 [2024-05-14 23:31:17.280355] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.061 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:15:54.061 23:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:54.061 23:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:54.061 23:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:54.061 23:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:54.061 23:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:54.061 23:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:54.320 23:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:54.579 [ 00:15:54.579 { 00:15:54.579 "name": "BaseBdev1", 00:15:54.579 "aliases": [ 00:15:54.579 "36fa3318-3c2e-417b-84d4-45e265daaee2" 00:15:54.579 ], 00:15:54.579 "product_name": "Malloc disk", 00:15:54.579 "block_size": 512, 00:15:54.579 "num_blocks": 65536, 00:15:54.579 "uuid": "36fa3318-3c2e-417b-84d4-45e265daaee2", 00:15:54.579 "assigned_rate_limits": { 00:15:54.579 "rw_ios_per_sec": 0, 00:15:54.579 "rw_mbytes_per_sec": 0, 00:15:54.579 "r_mbytes_per_sec": 0, 00:15:54.579 "w_mbytes_per_sec": 0 00:15:54.579 }, 00:15:54.579 "claimed": true, 00:15:54.579 "claim_type": "exclusive_write", 00:15:54.579 "zoned": false, 00:15:54.579 "supported_io_types": { 00:15:54.579 "read": true, 00:15:54.579 "write": true, 00:15:54.579 "unmap": true, 00:15:54.579 "write_zeroes": true, 00:15:54.579 "flush": true, 00:15:54.579 "reset": true, 00:15:54.579 "compare": false, 00:15:54.579 "compare_and_write": false, 00:15:54.579 "abort": true, 00:15:54.579 "nvme_admin": false, 00:15:54.579 "nvme_io": false 00:15:54.579 }, 00:15:54.579 "memory_domains": [ 00:15:54.579 { 00:15:54.579 "dma_device_id": "system", 00:15:54.579 "dma_device_type": 1 00:15:54.579 }, 00:15:54.579 { 00:15:54.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.579 "dma_device_type": 2 00:15:54.579 } 00:15:54.579 ], 00:15:54.579 "driver_specific": {} 00:15:54.579 } 00:15:54.579 ] 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.579 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.837 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.837 "name": "Existed_Raid", 00:15:54.837 "uuid": "3d0b2706-b09e-4af1-89eb-accb4cbf6bdc", 00:15:54.837 "strip_size_kb": 64, 00:15:54.837 "state": "configuring", 00:15:54.837 "raid_level": "concat", 00:15:54.837 "superblock": true, 00:15:54.837 "num_base_bdevs": 3, 00:15:54.837 "num_base_bdevs_discovered": 2, 00:15:54.837 "num_base_bdevs_operational": 3, 00:15:54.837 "base_bdevs_list": [ 00:15:54.837 { 00:15:54.837 "name": "BaseBdev1", 00:15:54.837 "uuid": "36fa3318-3c2e-417b-84d4-45e265daaee2", 00:15:54.837 "is_configured": true, 00:15:54.837 "data_offset": 2048, 00:15:54.837 "data_size": 63488 00:15:54.837 }, 00:15:54.838 { 00:15:54.838 "name": null, 00:15:54.838 "uuid": "00ea94b8-66e8-4bff-a341-5fddba812354", 00:15:54.838 "is_configured": false, 00:15:54.838 "data_offset": 2048, 00:15:54.838 "data_size": 63488 00:15:54.838 }, 00:15:54.838 { 00:15:54.838 "name": "BaseBdev3", 00:15:54.838 "uuid": "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9", 00:15:54.838 "is_configured": true, 00:15:54.838 "data_offset": 2048, 00:15:54.838 "data_size": 63488 00:15:54.838 } 00:15:54.838 ] 00:15:54.838 }' 00:15:54.838 23:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.838 23:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.404 23:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.404 23:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:55.674 23:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:55.674 23:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:55.932 [2024-05-14 23:31:19.060665] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.932 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.191 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:56.191 "name": "Existed_Raid", 00:15:56.191 "uuid": "3d0b2706-b09e-4af1-89eb-accb4cbf6bdc", 00:15:56.191 "strip_size_kb": 64, 00:15:56.191 "state": "configuring", 00:15:56.191 "raid_level": "concat", 00:15:56.191 "superblock": true, 00:15:56.191 "num_base_bdevs": 3, 00:15:56.191 "num_base_bdevs_discovered": 1, 00:15:56.191 "num_base_bdevs_operational": 3, 00:15:56.191 "base_bdevs_list": [ 00:15:56.191 { 00:15:56.191 "name": "BaseBdev1", 00:15:56.191 "uuid": "36fa3318-3c2e-417b-84d4-45e265daaee2", 00:15:56.191 "is_configured": true, 00:15:56.191 "data_offset": 2048, 00:15:56.191 "data_size": 63488 00:15:56.191 }, 00:15:56.191 { 00:15:56.191 "name": null, 00:15:56.191 "uuid": "00ea94b8-66e8-4bff-a341-5fddba812354", 00:15:56.191 "is_configured": false, 00:15:56.191 "data_offset": 2048, 00:15:56.191 "data_size": 63488 00:15:56.191 }, 00:15:56.191 { 00:15:56.191 "name": null, 00:15:56.191 "uuid": "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9", 00:15:56.191 "is_configured": false, 00:15:56.191 "data_offset": 2048, 00:15:56.191 "data_size": 63488 00:15:56.191 } 00:15:56.191 ] 00:15:56.191 }' 00:15:56.191 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:56.191 23:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.757 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.757 23:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:57.015 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:15:57.015 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:57.274 [2024-05-14 23:31:20.420832] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.274 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.531 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.531 "name": "Existed_Raid", 00:15:57.531 "uuid": "3d0b2706-b09e-4af1-89eb-accb4cbf6bdc", 00:15:57.531 "strip_size_kb": 64, 00:15:57.531 "state": "configuring", 00:15:57.531 "raid_level": "concat", 00:15:57.531 "superblock": true, 00:15:57.531 "num_base_bdevs": 3, 00:15:57.531 "num_base_bdevs_discovered": 2, 00:15:57.531 "num_base_bdevs_operational": 3, 00:15:57.531 "base_bdevs_list": [ 00:15:57.531 { 00:15:57.531 "name": "BaseBdev1", 00:15:57.531 "uuid": "36fa3318-3c2e-417b-84d4-45e265daaee2", 00:15:57.531 "is_configured": true, 00:15:57.531 "data_offset": 2048, 00:15:57.531 "data_size": 63488 00:15:57.531 }, 00:15:57.532 { 00:15:57.532 "name": null, 00:15:57.532 "uuid": "00ea94b8-66e8-4bff-a341-5fddba812354", 00:15:57.532 "is_configured": false, 00:15:57.532 "data_offset": 2048, 00:15:57.532 "data_size": 63488 00:15:57.532 }, 00:15:57.532 { 00:15:57.532 "name": "BaseBdev3", 00:15:57.532 "uuid": "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9", 00:15:57.532 "is_configured": true, 00:15:57.532 "data_offset": 2048, 00:15:57.532 "data_size": 63488 00:15:57.532 } 00:15:57.532 ] 00:15:57.532 }' 00:15:57.532 23:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.532 23:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.098 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.098 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:58.355 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:15:58.355 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:58.613 [2024-05-14 23:31:21.721028] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.613 23:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.871 23:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:58.871 "name": "Existed_Raid", 00:15:58.871 "uuid": "3d0b2706-b09e-4af1-89eb-accb4cbf6bdc", 00:15:58.871 "strip_size_kb": 64, 00:15:58.871 "state": "configuring", 00:15:58.871 "raid_level": "concat", 00:15:58.871 "superblock": true, 00:15:58.871 "num_base_bdevs": 3, 00:15:58.871 "num_base_bdevs_discovered": 1, 00:15:58.871 "num_base_bdevs_operational": 3, 00:15:58.871 "base_bdevs_list": [ 00:15:58.871 { 00:15:58.871 "name": null, 00:15:58.871 "uuid": "36fa3318-3c2e-417b-84d4-45e265daaee2", 00:15:58.871 "is_configured": false, 00:15:58.871 "data_offset": 2048, 00:15:58.871 "data_size": 63488 00:15:58.871 }, 00:15:58.871 { 00:15:58.871 "name": null, 00:15:58.871 "uuid": "00ea94b8-66e8-4bff-a341-5fddba812354", 00:15:58.871 "is_configured": false, 00:15:58.871 "data_offset": 2048, 00:15:58.871 "data_size": 63488 00:15:58.871 }, 00:15:58.871 { 00:15:58.871 "name": "BaseBdev3", 00:15:58.871 "uuid": "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9", 00:15:58.871 "is_configured": true, 00:15:58.871 "data_offset": 2048, 00:15:58.871 "data_size": 63488 00:15:58.871 } 00:15:58.871 ] 00:15:58.871 }' 00:15:58.871 23:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:58.871 23:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.438 23:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.438 23:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:59.694 23:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:15:59.694 23:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:59.951 [2024-05-14 23:31:23.152981] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.951 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:59.951 23:31:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:59.951 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:59.951 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:59.951 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:59.951 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:59.951 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.951 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.951 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.951 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.951 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.951 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.517 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.517 "name": "Existed_Raid", 00:16:00.517 "uuid": "3d0b2706-b09e-4af1-89eb-accb4cbf6bdc", 00:16:00.517 "strip_size_kb": 64, 00:16:00.517 "state": "configuring", 00:16:00.517 "raid_level": "concat", 00:16:00.517 "superblock": true, 00:16:00.517 "num_base_bdevs": 3, 00:16:00.517 "num_base_bdevs_discovered": 2, 00:16:00.517 "num_base_bdevs_operational": 3, 00:16:00.517 "base_bdevs_list": [ 00:16:00.517 { 00:16:00.517 "name": null, 00:16:00.517 "uuid": "36fa3318-3c2e-417b-84d4-45e265daaee2", 00:16:00.517 "is_configured": false, 00:16:00.517 "data_offset": 2048, 00:16:00.517 "data_size": 63488 00:16:00.517 }, 00:16:00.517 { 00:16:00.517 "name": "BaseBdev2", 00:16:00.517 "uuid": "00ea94b8-66e8-4bff-a341-5fddba812354", 00:16:00.517 "is_configured": true, 00:16:00.517 "data_offset": 2048, 00:16:00.517 "data_size": 63488 00:16:00.517 }, 00:16:00.517 { 00:16:00.517 "name": "BaseBdev3", 00:16:00.517 "uuid": "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9", 00:16:00.517 "is_configured": true, 00:16:00.517 "data_offset": 2048, 00:16:00.517 "data_size": 63488 00:16:00.517 } 00:16:00.517 ] 00:16:00.517 }' 00:16:00.517 23:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.517 23:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.085 23:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:01.085 23:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.345 23:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:16:01.345 23:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:01.345 23:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.604 23:31:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 36fa3318-3c2e-417b-84d4-45e265daaee2 00:16:01.604 [2024-05-14 23:31:24.856784] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:01.604 [2024-05-14 23:31:24.856964] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:16:01.604 [2024-05-14 23:31:24.856981] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:01.604 [2024-05-14 23:31:24.857058] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:01.604 NewBaseBdev 00:16:01.604 [2024-05-14 23:31:24.857525] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:16:01.604 [2024-05-14 23:31:24.857544] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:16:01.604 [2024-05-14 23:31:24.857643] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.604 23:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:16:01.604 23:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:16:01.604 23:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:01.604 23:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:01.604 23:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:01.604 23:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:01.604 23:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.883 23:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:02.143 [ 00:16:02.143 { 00:16:02.143 "name": "NewBaseBdev", 00:16:02.143 "aliases": [ 00:16:02.143 "36fa3318-3c2e-417b-84d4-45e265daaee2" 00:16:02.143 ], 00:16:02.143 "product_name": "Malloc disk", 00:16:02.143 "block_size": 512, 00:16:02.143 "num_blocks": 65536, 00:16:02.143 "uuid": "36fa3318-3c2e-417b-84d4-45e265daaee2", 00:16:02.143 "assigned_rate_limits": { 00:16:02.143 "rw_ios_per_sec": 0, 00:16:02.143 "rw_mbytes_per_sec": 0, 00:16:02.143 "r_mbytes_per_sec": 0, 00:16:02.143 "w_mbytes_per_sec": 0 00:16:02.143 }, 00:16:02.143 "claimed": true, 00:16:02.143 "claim_type": "exclusive_write", 00:16:02.143 "zoned": false, 00:16:02.143 "supported_io_types": { 00:16:02.143 "read": true, 00:16:02.143 "write": true, 00:16:02.143 "unmap": true, 00:16:02.143 "write_zeroes": true, 00:16:02.143 "flush": true, 00:16:02.143 "reset": true, 00:16:02.143 "compare": false, 00:16:02.143 "compare_and_write": false, 00:16:02.143 "abort": true, 00:16:02.143 "nvme_admin": false, 00:16:02.143 "nvme_io": false 00:16:02.143 }, 00:16:02.143 "memory_domains": [ 00:16:02.143 { 00:16:02.143 "dma_device_id": "system", 00:16:02.143 "dma_device_type": 1 00:16:02.143 }, 00:16:02.143 { 00:16:02.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.143 "dma_device_type": 2 00:16:02.143 } 00:16:02.143 ], 00:16:02.143 
"driver_specific": {} 00:16:02.143 } 00:16:02.143 ] 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.143 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.403 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.403 "name": "Existed_Raid", 00:16:02.403 "uuid": "3d0b2706-b09e-4af1-89eb-accb4cbf6bdc", 00:16:02.403 "strip_size_kb": 64, 00:16:02.403 "state": "online", 00:16:02.403 "raid_level": "concat", 00:16:02.403 "superblock": true, 00:16:02.403 "num_base_bdevs": 3, 00:16:02.403 "num_base_bdevs_discovered": 3, 00:16:02.403 "num_base_bdevs_operational": 3, 00:16:02.403 "base_bdevs_list": [ 00:16:02.403 { 00:16:02.403 "name": "NewBaseBdev", 00:16:02.403 "uuid": "36fa3318-3c2e-417b-84d4-45e265daaee2", 00:16:02.403 "is_configured": true, 00:16:02.403 "data_offset": 2048, 00:16:02.403 "data_size": 63488 00:16:02.403 }, 00:16:02.403 { 00:16:02.403 "name": "BaseBdev2", 00:16:02.403 "uuid": "00ea94b8-66e8-4bff-a341-5fddba812354", 00:16:02.403 "is_configured": true, 00:16:02.403 "data_offset": 2048, 00:16:02.403 "data_size": 63488 00:16:02.403 }, 00:16:02.403 { 00:16:02.403 "name": "BaseBdev3", 00:16:02.403 "uuid": "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9", 00:16:02.403 "is_configured": true, 00:16:02.403 "data_offset": 2048, 00:16:02.403 "data_size": 63488 00:16:02.403 } 00:16:02.403 ] 00:16:02.403 }' 00:16:02.403 23:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.403 23:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_info 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:03.338 [2024-05-14 23:31:26.457261] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:03.338 "name": "Existed_Raid", 00:16:03.338 "aliases": [ 00:16:03.338 "3d0b2706-b09e-4af1-89eb-accb4cbf6bdc" 00:16:03.338 ], 00:16:03.338 "product_name": "Raid Volume", 00:16:03.338 "block_size": 512, 00:16:03.338 "num_blocks": 190464, 00:16:03.338 "uuid": "3d0b2706-b09e-4af1-89eb-accb4cbf6bdc", 00:16:03.338 "assigned_rate_limits": { 00:16:03.338 "rw_ios_per_sec": 0, 00:16:03.338 "rw_mbytes_per_sec": 0, 00:16:03.338 "r_mbytes_per_sec": 0, 00:16:03.338 "w_mbytes_per_sec": 0 00:16:03.338 }, 00:16:03.338 "claimed": false, 00:16:03.338 "zoned": false, 00:16:03.338 "supported_io_types": { 00:16:03.338 "read": true, 00:16:03.338 "write": true, 00:16:03.338 "unmap": true, 00:16:03.338 "write_zeroes": true, 00:16:03.338 "flush": true, 00:16:03.338 "reset": true, 00:16:03.338 "compare": false, 00:16:03.338 "compare_and_write": false, 00:16:03.338 "abort": false, 00:16:03.338 "nvme_admin": false, 00:16:03.338 "nvme_io": false 00:16:03.338 }, 00:16:03.338 "memory_domains": [ 00:16:03.338 { 00:16:03.338 "dma_device_id": "system", 00:16:03.338 "dma_device_type": 1 00:16:03.338 }, 00:16:03.338 { 00:16:03.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.338 "dma_device_type": 2 00:16:03.338 }, 00:16:03.338 { 00:16:03.338 "dma_device_id": "system", 00:16:03.338 "dma_device_type": 1 00:16:03.338 }, 00:16:03.338 { 00:16:03.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.338 "dma_device_type": 2 00:16:03.338 }, 00:16:03.338 { 00:16:03.338 "dma_device_id": "system", 00:16:03.338 "dma_device_type": 1 00:16:03.338 }, 00:16:03.338 { 00:16:03.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.338 "dma_device_type": 2 00:16:03.338 } 00:16:03.338 ], 00:16:03.338 "driver_specific": { 00:16:03.338 "raid": { 00:16:03.338 "uuid": "3d0b2706-b09e-4af1-89eb-accb4cbf6bdc", 00:16:03.338 "strip_size_kb": 64, 00:16:03.338 "state": "online", 00:16:03.338 "raid_level": "concat", 00:16:03.338 "superblock": true, 00:16:03.338 "num_base_bdevs": 3, 00:16:03.338 "num_base_bdevs_discovered": 3, 00:16:03.338 "num_base_bdevs_operational": 3, 00:16:03.338 "base_bdevs_list": [ 00:16:03.338 { 00:16:03.338 "name": "NewBaseBdev", 00:16:03.338 "uuid": "36fa3318-3c2e-417b-84d4-45e265daaee2", 00:16:03.338 "is_configured": true, 00:16:03.338 "data_offset": 2048, 00:16:03.338 "data_size": 63488 00:16:03.338 }, 00:16:03.338 { 00:16:03.338 "name": "BaseBdev2", 00:16:03.338 "uuid": "00ea94b8-66e8-4bff-a341-5fddba812354", 00:16:03.338 "is_configured": true, 00:16:03.338 "data_offset": 2048, 00:16:03.338 "data_size": 63488 00:16:03.338 }, 00:16:03.338 { 00:16:03.338 "name": "BaseBdev3", 00:16:03.338 "uuid": "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9", 00:16:03.338 "is_configured": true, 00:16:03.338 "data_offset": 2048, 00:16:03.338 "data_size": 63488 00:16:03.338 } 00:16:03.338 ] 00:16:03.338 } 
00:16:03.338 } 00:16:03.338 }' 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:16:03.338 BaseBdev2 00:16:03.338 BaseBdev3' 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:03.338 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:03.597 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:03.597 "name": "NewBaseBdev", 00:16:03.597 "aliases": [ 00:16:03.597 "36fa3318-3c2e-417b-84d4-45e265daaee2" 00:16:03.597 ], 00:16:03.597 "product_name": "Malloc disk", 00:16:03.597 "block_size": 512, 00:16:03.597 "num_blocks": 65536, 00:16:03.597 "uuid": "36fa3318-3c2e-417b-84d4-45e265daaee2", 00:16:03.597 "assigned_rate_limits": { 00:16:03.597 "rw_ios_per_sec": 0, 00:16:03.597 "rw_mbytes_per_sec": 0, 00:16:03.597 "r_mbytes_per_sec": 0, 00:16:03.597 "w_mbytes_per_sec": 0 00:16:03.597 }, 00:16:03.597 "claimed": true, 00:16:03.597 "claim_type": "exclusive_write", 00:16:03.597 "zoned": false, 00:16:03.597 "supported_io_types": { 00:16:03.597 "read": true, 00:16:03.597 "write": true, 00:16:03.597 "unmap": true, 00:16:03.597 "write_zeroes": true, 00:16:03.597 "flush": true, 00:16:03.597 "reset": true, 00:16:03.597 "compare": false, 00:16:03.597 "compare_and_write": false, 00:16:03.597 "abort": true, 00:16:03.597 "nvme_admin": false, 00:16:03.597 "nvme_io": false 00:16:03.597 }, 00:16:03.597 "memory_domains": [ 00:16:03.597 { 00:16:03.597 "dma_device_id": "system", 00:16:03.597 "dma_device_type": 1 00:16:03.597 }, 00:16:03.597 { 00:16:03.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.597 "dma_device_type": 2 00:16:03.597 } 00:16:03.597 ], 00:16:03.597 "driver_specific": {} 00:16:03.597 }' 00:16:03.597 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:03.597 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:03.597 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:03.597 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:03.856 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:03.856 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:03.856 23:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:03.856 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:03.856 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:03.856 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:04.116 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:04.116 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:04.116 23:31:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:04.116 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:04.116 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:04.375 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:04.375 "name": "BaseBdev2", 00:16:04.375 "aliases": [ 00:16:04.375 "00ea94b8-66e8-4bff-a341-5fddba812354" 00:16:04.375 ], 00:16:04.375 "product_name": "Malloc disk", 00:16:04.375 "block_size": 512, 00:16:04.375 "num_blocks": 65536, 00:16:04.375 "uuid": "00ea94b8-66e8-4bff-a341-5fddba812354", 00:16:04.375 "assigned_rate_limits": { 00:16:04.375 "rw_ios_per_sec": 0, 00:16:04.375 "rw_mbytes_per_sec": 0, 00:16:04.375 "r_mbytes_per_sec": 0, 00:16:04.375 "w_mbytes_per_sec": 0 00:16:04.375 }, 00:16:04.375 "claimed": true, 00:16:04.375 "claim_type": "exclusive_write", 00:16:04.375 "zoned": false, 00:16:04.375 "supported_io_types": { 00:16:04.375 "read": true, 00:16:04.375 "write": true, 00:16:04.375 "unmap": true, 00:16:04.375 "write_zeroes": true, 00:16:04.375 "flush": true, 00:16:04.375 "reset": true, 00:16:04.375 "compare": false, 00:16:04.375 "compare_and_write": false, 00:16:04.375 "abort": true, 00:16:04.375 "nvme_admin": false, 00:16:04.375 "nvme_io": false 00:16:04.375 }, 00:16:04.375 "memory_domains": [ 00:16:04.375 { 00:16:04.375 "dma_device_id": "system", 00:16:04.375 "dma_device_type": 1 00:16:04.375 }, 00:16:04.375 { 00:16:04.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.375 "dma_device_type": 2 00:16:04.375 } 00:16:04.375 ], 00:16:04.375 "driver_specific": {} 00:16:04.375 }' 00:16:04.375 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:04.375 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:04.375 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:04.634 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:04.635 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:04.635 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:04.635 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:04.635 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:04.635 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:04.635 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:04.893 23:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:04.893 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:04.893 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:04.893 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:04.893 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:05.220 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 
00:16:05.220 "name": "BaseBdev3", 00:16:05.220 "aliases": [ 00:16:05.220 "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9" 00:16:05.220 ], 00:16:05.220 "product_name": "Malloc disk", 00:16:05.220 "block_size": 512, 00:16:05.220 "num_blocks": 65536, 00:16:05.220 "uuid": "e2a1c08e-2fe8-448f-b312-3f5a3cd144f9", 00:16:05.220 "assigned_rate_limits": { 00:16:05.220 "rw_ios_per_sec": 0, 00:16:05.220 "rw_mbytes_per_sec": 0, 00:16:05.220 "r_mbytes_per_sec": 0, 00:16:05.220 "w_mbytes_per_sec": 0 00:16:05.220 }, 00:16:05.220 "claimed": true, 00:16:05.220 "claim_type": "exclusive_write", 00:16:05.220 "zoned": false, 00:16:05.220 "supported_io_types": { 00:16:05.220 "read": true, 00:16:05.220 "write": true, 00:16:05.220 "unmap": true, 00:16:05.220 "write_zeroes": true, 00:16:05.220 "flush": true, 00:16:05.220 "reset": true, 00:16:05.220 "compare": false, 00:16:05.220 "compare_and_write": false, 00:16:05.220 "abort": true, 00:16:05.220 "nvme_admin": false, 00:16:05.220 "nvme_io": false 00:16:05.220 }, 00:16:05.220 "memory_domains": [ 00:16:05.220 { 00:16:05.220 "dma_device_id": "system", 00:16:05.220 "dma_device_type": 1 00:16:05.220 }, 00:16:05.220 { 00:16:05.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.220 "dma_device_type": 2 00:16:05.220 } 00:16:05.220 ], 00:16:05.220 "driver_specific": {} 00:16:05.220 }' 00:16:05.220 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:05.220 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:05.220 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:05.220 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:05.220 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:05.220 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.220 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:05.220 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:05.478 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.478 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:05.478 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:05.478 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:05.478 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:05.736 [2024-05-14 23:31:28.877413] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.736 [2024-05-14 23:31:28.877458] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.736 [2024-05-14 23:31:28.877528] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.736 [2024-05-14 23:31:28.877570] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.736 [2024-05-14 23:31:28.877581] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:16:05.736 23:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # 
killprocess 59900 00:16:05.736 23:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 59900 ']' 00:16:05.736 23:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 59900 00:16:05.736 23:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:16:05.736 23:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:05.736 23:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59900 00:16:05.736 killing process with pid 59900 00:16:05.736 23:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:05.736 23:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:05.736 23:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59900' 00:16:05.736 23:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 59900 00:16:05.736 23:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 59900 00:16:05.736 [2024-05-14 23:31:28.927948] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.994 [2024-05-14 23:31:29.230392] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.370 ************************************ 00:16:07.370 END TEST raid_state_function_test_sb 00:16:07.370 ************************************ 00:16:07.370 23:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:16:07.370 00:16:07.370 real 0m31.754s 00:16:07.370 user 0m59.666s 00:16:07.370 sys 0m2.985s 00:16:07.370 23:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:07.370 23:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.370 23:31:30 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:07.370 23:31:30 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:07.370 23:31:30 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:07.370 23:31:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.370 ************************************ 00:16:07.370 START TEST raid_superblock_test 00:16:07.370 ************************************ 00:16:07.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
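That "Waiting for process to start up..." line is the script blocking until the RPC server it just launched is reachable; every step of this test is then driven over that UNIX socket. A minimal sketch of the startup handshake, using only the binary path, socket and flags visible in the trace (the polling loop is an approximation added here, not the test's own waitforlisten helper, which also tracks the app pid — 60908 in this run):

    # sketch only -- the real test uses the waitforlisten helper from autotest_common.sh
    SOCK=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$SOCK" -L bdev_raid &
    svc_pid=$!    # the helper also watches this pid while it waits

    # block until the app has created the socket and answers a trivial RPC
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done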
00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 3 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60908 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60908 /var/tmp/spdk-raid.sock 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 60908 ']' 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:07.370 23:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.629 [2024-05-14 23:31:30.777115] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:16:07.629 [2024-05-14 23:31:30.777348] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60908 ] 00:16:07.887 [2024-05-14 23:31:30.949968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.146 [2024-05-14 23:31:31.202923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.146 [2024-05-14 23:31:31.432500] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.406 23:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:08.406 23:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:16:08.406 23:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:08.406 23:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:08.406 23:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:08.406 23:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:08.406 23:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:08.406 23:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:08.406 23:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:08.406 23:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:08.406 23:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:08.670 malloc1 00:16:08.670 23:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:08.954 [2024-05-14 23:31:32.051104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:08.954 [2024-05-14 23:31:32.051507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.954 [2024-05-14 23:31:32.051596] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:16:08.955 [2024-05-14 23:31:32.051647] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.955 [2024-05-14 23:31:32.053454] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.955 [2024-05-14 23:31:32.053520] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:08.955 pt1 00:16:08.955 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:08.955 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:08.955 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:08.955 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:08.955 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:08.955 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:16:08.955 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:08.955 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:08.955 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:09.218 malloc2 00:16:09.218 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:09.218 [2024-05-14 23:31:32.486375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:09.218 [2024-05-14 23:31:32.486483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.218 [2024-05-14 23:31:32.486536] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:16:09.218 [2024-05-14 23:31:32.486578] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.218 [2024-05-14 23:31:32.488693] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.218 [2024-05-14 23:31:32.488750] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:09.218 pt2 00:16:09.218 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:09.218 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:09.218 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:09.218 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:09.218 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:09.218 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:09.218 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:09.218 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:09.218 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:09.476 malloc3 00:16:09.476 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:09.735 [2024-05-14 23:31:32.921026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:09.735 [2024-05-14 23:31:32.921388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.735 [2024-05-14 23:31:32.921487] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:16:09.735 [2024-05-14 23:31:32.921558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.735 pt3 00:16:09.735 [2024-05-14 23:31:32.923284] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.735 [2024-05-14 23:31:32.923342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:09.735 23:31:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:09.735 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:09.735 23:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:09.993 [2024-05-14 23:31:33.137165] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:09.993 [2024-05-14 23:31:33.138875] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:09.993 [2024-05-14 23:31:33.138938] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:09.993 [2024-05-14 23:31:33.139096] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:16:09.993 [2024-05-14 23:31:33.139112] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:09.993 [2024-05-14 23:31:33.139256] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:09.993 [2024-05-14 23:31:33.139575] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:16:09.993 [2024-05-14 23:31:33.139602] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:16:09.993 [2024-05-14 23:31:33.139728] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.993 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.252 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:10.252 "name": "raid_bdev1", 00:16:10.252 "uuid": "ed619aff-c2c3-46cd-8d6b-a159f798e007", 00:16:10.252 "strip_size_kb": 64, 00:16:10.252 "state": "online", 00:16:10.252 "raid_level": "concat", 00:16:10.252 "superblock": true, 00:16:10.252 "num_base_bdevs": 3, 00:16:10.252 "num_base_bdevs_discovered": 3, 00:16:10.252 "num_base_bdevs_operational": 3, 00:16:10.252 "base_bdevs_list": [ 00:16:10.252 { 00:16:10.252 "name": "pt1", 00:16:10.252 "uuid": "2fb7cbca-1b46-551c-85ff-80f4e81aec0e", 00:16:10.252 
"is_configured": true, 00:16:10.252 "data_offset": 2048, 00:16:10.252 "data_size": 63488 00:16:10.252 }, 00:16:10.252 { 00:16:10.252 "name": "pt2", 00:16:10.252 "uuid": "049e51ae-3c61-519a-a3aa-d7a18893b2ba", 00:16:10.252 "is_configured": true, 00:16:10.252 "data_offset": 2048, 00:16:10.252 "data_size": 63488 00:16:10.252 }, 00:16:10.252 { 00:16:10.252 "name": "pt3", 00:16:10.252 "uuid": "2a7501fa-1085-5340-a9ab-76050caf3ee8", 00:16:10.252 "is_configured": true, 00:16:10.252 "data_offset": 2048, 00:16:10.252 "data_size": 63488 00:16:10.252 } 00:16:10.252 ] 00:16:10.252 }' 00:16:10.252 23:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:10.252 23:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.819 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:10.819 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:16:10.819 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:10.819 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:10.819 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:10.819 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:10.819 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:10.819 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:11.077 [2024-05-14 23:31:34.265453] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.077 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:11.077 "name": "raid_bdev1", 00:16:11.077 "aliases": [ 00:16:11.077 "ed619aff-c2c3-46cd-8d6b-a159f798e007" 00:16:11.077 ], 00:16:11.077 "product_name": "Raid Volume", 00:16:11.077 "block_size": 512, 00:16:11.077 "num_blocks": 190464, 00:16:11.077 "uuid": "ed619aff-c2c3-46cd-8d6b-a159f798e007", 00:16:11.077 "assigned_rate_limits": { 00:16:11.077 "rw_ios_per_sec": 0, 00:16:11.077 "rw_mbytes_per_sec": 0, 00:16:11.077 "r_mbytes_per_sec": 0, 00:16:11.077 "w_mbytes_per_sec": 0 00:16:11.077 }, 00:16:11.077 "claimed": false, 00:16:11.077 "zoned": false, 00:16:11.077 "supported_io_types": { 00:16:11.077 "read": true, 00:16:11.077 "write": true, 00:16:11.077 "unmap": true, 00:16:11.077 "write_zeroes": true, 00:16:11.077 "flush": true, 00:16:11.077 "reset": true, 00:16:11.077 "compare": false, 00:16:11.077 "compare_and_write": false, 00:16:11.077 "abort": false, 00:16:11.077 "nvme_admin": false, 00:16:11.077 "nvme_io": false 00:16:11.077 }, 00:16:11.077 "memory_domains": [ 00:16:11.077 { 00:16:11.077 "dma_device_id": "system", 00:16:11.077 "dma_device_type": 1 00:16:11.077 }, 00:16:11.077 { 00:16:11.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.077 "dma_device_type": 2 00:16:11.077 }, 00:16:11.077 { 00:16:11.077 "dma_device_id": "system", 00:16:11.077 "dma_device_type": 1 00:16:11.077 }, 00:16:11.077 { 00:16:11.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.077 "dma_device_type": 2 00:16:11.077 }, 00:16:11.077 { 00:16:11.077 "dma_device_id": "system", 00:16:11.077 "dma_device_type": 1 00:16:11.077 }, 00:16:11.077 { 00:16:11.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.078 "dma_device_type": 
2 00:16:11.078 } 00:16:11.078 ], 00:16:11.078 "driver_specific": { 00:16:11.078 "raid": { 00:16:11.078 "uuid": "ed619aff-c2c3-46cd-8d6b-a159f798e007", 00:16:11.078 "strip_size_kb": 64, 00:16:11.078 "state": "online", 00:16:11.078 "raid_level": "concat", 00:16:11.078 "superblock": true, 00:16:11.078 "num_base_bdevs": 3, 00:16:11.078 "num_base_bdevs_discovered": 3, 00:16:11.078 "num_base_bdevs_operational": 3, 00:16:11.078 "base_bdevs_list": [ 00:16:11.078 { 00:16:11.078 "name": "pt1", 00:16:11.078 "uuid": "2fb7cbca-1b46-551c-85ff-80f4e81aec0e", 00:16:11.078 "is_configured": true, 00:16:11.078 "data_offset": 2048, 00:16:11.078 "data_size": 63488 00:16:11.078 }, 00:16:11.078 { 00:16:11.078 "name": "pt2", 00:16:11.078 "uuid": "049e51ae-3c61-519a-a3aa-d7a18893b2ba", 00:16:11.078 "is_configured": true, 00:16:11.078 "data_offset": 2048, 00:16:11.078 "data_size": 63488 00:16:11.078 }, 00:16:11.078 { 00:16:11.078 "name": "pt3", 00:16:11.078 "uuid": "2a7501fa-1085-5340-a9ab-76050caf3ee8", 00:16:11.078 "is_configured": true, 00:16:11.078 "data_offset": 2048, 00:16:11.078 "data_size": 63488 00:16:11.078 } 00:16:11.078 ] 00:16:11.078 } 00:16:11.078 } 00:16:11.078 }' 00:16:11.078 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:11.078 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:16:11.078 pt2 00:16:11.078 pt3' 00:16:11.078 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:11.078 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:11.078 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:11.336 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:11.336 "name": "pt1", 00:16:11.336 "aliases": [ 00:16:11.336 "2fb7cbca-1b46-551c-85ff-80f4e81aec0e" 00:16:11.336 ], 00:16:11.336 "product_name": "passthru", 00:16:11.336 "block_size": 512, 00:16:11.336 "num_blocks": 65536, 00:16:11.336 "uuid": "2fb7cbca-1b46-551c-85ff-80f4e81aec0e", 00:16:11.336 "assigned_rate_limits": { 00:16:11.336 "rw_ios_per_sec": 0, 00:16:11.336 "rw_mbytes_per_sec": 0, 00:16:11.336 "r_mbytes_per_sec": 0, 00:16:11.336 "w_mbytes_per_sec": 0 00:16:11.336 }, 00:16:11.336 "claimed": true, 00:16:11.336 "claim_type": "exclusive_write", 00:16:11.336 "zoned": false, 00:16:11.336 "supported_io_types": { 00:16:11.336 "read": true, 00:16:11.336 "write": true, 00:16:11.336 "unmap": true, 00:16:11.336 "write_zeroes": true, 00:16:11.336 "flush": true, 00:16:11.336 "reset": true, 00:16:11.336 "compare": false, 00:16:11.336 "compare_and_write": false, 00:16:11.336 "abort": true, 00:16:11.336 "nvme_admin": false, 00:16:11.336 "nvme_io": false 00:16:11.336 }, 00:16:11.336 "memory_domains": [ 00:16:11.336 { 00:16:11.336 "dma_device_id": "system", 00:16:11.336 "dma_device_type": 1 00:16:11.336 }, 00:16:11.336 { 00:16:11.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.336 "dma_device_type": 2 00:16:11.336 } 00:16:11.336 ], 00:16:11.336 "driver_specific": { 00:16:11.336 "passthru": { 00:16:11.336 "name": "pt1", 00:16:11.336 "base_bdev_name": "malloc1" 00:16:11.336 } 00:16:11.336 } 00:16:11.336 }' 00:16:11.336 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:11.336 23:31:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:11.595 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:11.595 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:11.595 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:11.595 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:11.595 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:11.595 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:11.595 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.595 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:11.595 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:11.853 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:11.853 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:11.853 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:11.853 23:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:11.853 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:11.853 "name": "pt2", 00:16:11.853 "aliases": [ 00:16:11.853 "049e51ae-3c61-519a-a3aa-d7a18893b2ba" 00:16:11.853 ], 00:16:11.853 "product_name": "passthru", 00:16:11.853 "block_size": 512, 00:16:11.853 "num_blocks": 65536, 00:16:11.853 "uuid": "049e51ae-3c61-519a-a3aa-d7a18893b2ba", 00:16:11.853 "assigned_rate_limits": { 00:16:11.853 "rw_ios_per_sec": 0, 00:16:11.853 "rw_mbytes_per_sec": 0, 00:16:11.853 "r_mbytes_per_sec": 0, 00:16:11.853 "w_mbytes_per_sec": 0 00:16:11.853 }, 00:16:11.853 "claimed": true, 00:16:11.853 "claim_type": "exclusive_write", 00:16:11.853 "zoned": false, 00:16:11.853 "supported_io_types": { 00:16:11.853 "read": true, 00:16:11.853 "write": true, 00:16:11.853 "unmap": true, 00:16:11.853 "write_zeroes": true, 00:16:11.853 "flush": true, 00:16:11.853 "reset": true, 00:16:11.853 "compare": false, 00:16:11.853 "compare_and_write": false, 00:16:11.853 "abort": true, 00:16:11.853 "nvme_admin": false, 00:16:11.853 "nvme_io": false 00:16:11.853 }, 00:16:11.853 "memory_domains": [ 00:16:11.853 { 00:16:11.853 "dma_device_id": "system", 00:16:11.853 "dma_device_type": 1 00:16:11.853 }, 00:16:11.853 { 00:16:11.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.853 "dma_device_type": 2 00:16:11.853 } 00:16:11.853 ], 00:16:11.853 "driver_specific": { 00:16:11.853 "passthru": { 00:16:11.853 "name": "pt2", 00:16:11.853 "base_bdev_name": "malloc2" 00:16:11.854 } 00:16:11.854 } 00:16:11.854 }' 00:16:12.112 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:12.112 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:12.112 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:12.112 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:12.112 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:12.112 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.112 23:31:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:12.372 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:12.372 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.372 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:12.372 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:12.372 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:12.372 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:12.372 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:12.372 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:12.648 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:12.648 "name": "pt3", 00:16:12.648 "aliases": [ 00:16:12.648 "2a7501fa-1085-5340-a9ab-76050caf3ee8" 00:16:12.648 ], 00:16:12.648 "product_name": "passthru", 00:16:12.648 "block_size": 512, 00:16:12.648 "num_blocks": 65536, 00:16:12.648 "uuid": "2a7501fa-1085-5340-a9ab-76050caf3ee8", 00:16:12.648 "assigned_rate_limits": { 00:16:12.648 "rw_ios_per_sec": 0, 00:16:12.648 "rw_mbytes_per_sec": 0, 00:16:12.648 "r_mbytes_per_sec": 0, 00:16:12.648 "w_mbytes_per_sec": 0 00:16:12.648 }, 00:16:12.648 "claimed": true, 00:16:12.648 "claim_type": "exclusive_write", 00:16:12.648 "zoned": false, 00:16:12.648 "supported_io_types": { 00:16:12.648 "read": true, 00:16:12.648 "write": true, 00:16:12.648 "unmap": true, 00:16:12.648 "write_zeroes": true, 00:16:12.648 "flush": true, 00:16:12.648 "reset": true, 00:16:12.648 "compare": false, 00:16:12.648 "compare_and_write": false, 00:16:12.648 "abort": true, 00:16:12.648 "nvme_admin": false, 00:16:12.648 "nvme_io": false 00:16:12.648 }, 00:16:12.648 "memory_domains": [ 00:16:12.648 { 00:16:12.648 "dma_device_id": "system", 00:16:12.648 "dma_device_type": 1 00:16:12.648 }, 00:16:12.648 { 00:16:12.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.648 "dma_device_type": 2 00:16:12.648 } 00:16:12.648 ], 00:16:12.648 "driver_specific": { 00:16:12.648 "passthru": { 00:16:12.648 "name": "pt3", 00:16:12.648 "base_bdev_name": "malloc3" 00:16:12.648 } 00:16:12.648 } 00:16:12.648 }' 00:16:12.648 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:12.648 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:12.648 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:12.648 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:12.910 23:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:12.910 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.910 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:12.910 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:12.910 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.910 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:12.910 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 
-- # jq .dif_type 00:16:13.169 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:13.169 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:13.169 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:13.427 [2024-05-14 23:31:36.489682] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.427 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ed619aff-c2c3-46cd-8d6b-a159f798e007 00:16:13.427 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ed619aff-c2c3-46cd-8d6b-a159f798e007 ']' 00:16:13.427 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:13.427 [2024-05-14 23:31:36.697545] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:13.427 [2024-05-14 23:31:36.697587] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.427 [2024-05-14 23:31:36.697664] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.427 [2024-05-14 23:31:36.697716] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.427 [2024-05-14 23:31:36.697728] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:16:13.685 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.685 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:13.685 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:13.685 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:13.685 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:13.685 23:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:13.944 23:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:13.944 23:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:14.202 23:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.202 23:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:14.460 23:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:14.460 23:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:14.718 23:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:14.975 [2024-05-14 23:31:38.153783] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:14.975 [2024-05-14 23:31:38.155752] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:14.975 [2024-05-14 23:31:38.155825] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:14.975 [2024-05-14 23:31:38.155888] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:14.975 [2024-05-14 23:31:38.155992] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:14.975 [2024-05-14 23:31:38.156045] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:14.975 [2024-05-14 23:31:38.156115] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.975 [2024-05-14 23:31:38.156136] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:16:14.975 request: 00:16:14.975 { 00:16:14.975 "name": "raid_bdev1", 00:16:14.975 "raid_level": "concat", 00:16:14.975 "base_bdevs": [ 00:16:14.975 "malloc1", 00:16:14.975 "malloc2", 00:16:14.975 "malloc3" 00:16:14.975 ], 00:16:14.975 "superblock": false, 00:16:14.975 "strip_size_kb": 64, 00:16:14.976 "method": "bdev_raid_create", 00:16:14.976 "req_id": 1 00:16:14.976 } 00:16:14.976 Got JSON-RPC error response 00:16:14.976 response: 00:16:14.976 { 00:16:14.976 "code": -17, 00:16:14.976 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:14.976 } 00:16:14.976 23:31:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # es=1 00:16:14.976 23:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:14.976 23:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:14.976 23:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:14.976 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.976 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:15.233 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:15.233 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:15.233 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:15.491 [2024-05-14 23:31:38.773771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:15.491 [2024-05-14 23:31:38.773883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.491 [2024-05-14 23:31:38.773941] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d680 00:16:15.491 [2024-05-14 23:31:38.773970] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.491 [2024-05-14 23:31:38.776249] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.491 [2024-05-14 23:31:38.776318] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:15.491 [2024-05-14 23:31:38.776471] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:15.491 [2024-05-14 23:31:38.776562] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:15.750 pt1 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.750 23:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.048 23:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:16:16.048 "name": "raid_bdev1", 00:16:16.048 "uuid": "ed619aff-c2c3-46cd-8d6b-a159f798e007", 00:16:16.048 "strip_size_kb": 64, 00:16:16.048 "state": "configuring", 00:16:16.048 "raid_level": "concat", 00:16:16.048 "superblock": true, 00:16:16.048 "num_base_bdevs": 3, 00:16:16.048 "num_base_bdevs_discovered": 1, 00:16:16.048 "num_base_bdevs_operational": 3, 00:16:16.048 "base_bdevs_list": [ 00:16:16.048 { 00:16:16.048 "name": "pt1", 00:16:16.048 "uuid": "2fb7cbca-1b46-551c-85ff-80f4e81aec0e", 00:16:16.048 "is_configured": true, 00:16:16.048 "data_offset": 2048, 00:16:16.048 "data_size": 63488 00:16:16.048 }, 00:16:16.048 { 00:16:16.048 "name": null, 00:16:16.048 "uuid": "049e51ae-3c61-519a-a3aa-d7a18893b2ba", 00:16:16.048 "is_configured": false, 00:16:16.048 "data_offset": 2048, 00:16:16.048 "data_size": 63488 00:16:16.048 }, 00:16:16.048 { 00:16:16.048 "name": null, 00:16:16.048 "uuid": "2a7501fa-1085-5340-a9ab-76050caf3ee8", 00:16:16.048 "is_configured": false, 00:16:16.048 "data_offset": 2048, 00:16:16.048 "data_size": 63488 00:16:16.048 } 00:16:16.048 ] 00:16:16.048 }' 00:16:16.048 23:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.048 23:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.644 23:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:16.644 23:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:16.901 [2024-05-14 23:31:40.106011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:16.901 [2024-05-14 23:31:40.106138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.901 [2024-05-14 23:31:40.106332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ee80 00:16:16.901 [2024-05-14 23:31:40.106367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.901 [2024-05-14 23:31:40.106846] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.901 [2024-05-14 23:31:40.106892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:16.901 [2024-05-14 23:31:40.107044] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:16.901 [2024-05-14 23:31:40.107089] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.901 pt2 00:16:16.901 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:17.159 [2024-05-14 23:31:40.330002] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.159 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.418 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:17.418 "name": "raid_bdev1", 00:16:17.418 "uuid": "ed619aff-c2c3-46cd-8d6b-a159f798e007", 00:16:17.418 "strip_size_kb": 64, 00:16:17.418 "state": "configuring", 00:16:17.418 "raid_level": "concat", 00:16:17.418 "superblock": true, 00:16:17.418 "num_base_bdevs": 3, 00:16:17.418 "num_base_bdevs_discovered": 1, 00:16:17.418 "num_base_bdevs_operational": 3, 00:16:17.418 "base_bdevs_list": [ 00:16:17.418 { 00:16:17.418 "name": "pt1", 00:16:17.418 "uuid": "2fb7cbca-1b46-551c-85ff-80f4e81aec0e", 00:16:17.418 "is_configured": true, 00:16:17.418 "data_offset": 2048, 00:16:17.418 "data_size": 63488 00:16:17.418 }, 00:16:17.418 { 00:16:17.418 "name": null, 00:16:17.418 "uuid": "049e51ae-3c61-519a-a3aa-d7a18893b2ba", 00:16:17.418 "is_configured": false, 00:16:17.418 "data_offset": 2048, 00:16:17.418 "data_size": 63488 00:16:17.418 }, 00:16:17.418 { 00:16:17.418 "name": null, 00:16:17.418 "uuid": "2a7501fa-1085-5340-a9ab-76050caf3ee8", 00:16:17.418 "is_configured": false, 00:16:17.418 "data_offset": 2048, 00:16:17.418 "data_size": 63488 00:16:17.418 } 00:16:17.418 ] 00:16:17.418 }' 00:16:17.418 23:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:17.418 23:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.986 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:17.986 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:17.986 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.244 [2024-05-14 23:31:41.410166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.244 [2024-05-14 23:31:41.410337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.244 [2024-05-14 23:31:41.410407] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030680 00:16:18.244 [2024-05-14 23:31:41.410451] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.244 [2024-05-14 23:31:41.410995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.244 [2024-05-14 23:31:41.411064] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.244 [2024-05-14 23:31:41.411630] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:18.244 [2024-05-14 23:31:41.411684] bdev_raid.c:3122:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:16:18.244 pt2 00:16:18.244 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:18.244 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:18.244 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:18.502 [2024-05-14 23:31:41.626132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:18.502 [2024-05-14 23:31:41.626242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.502 [2024-05-14 23:31:41.626293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031b80 00:16:18.502 [2024-05-14 23:31:41.626325] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.502 [2024-05-14 23:31:41.626719] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.502 [2024-05-14 23:31:41.626759] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:18.502 [2024-05-14 23:31:41.626869] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:18.502 [2024-05-14 23:31:41.626897] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:18.502 [2024-05-14 23:31:41.626987] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:16:18.502 [2024-05-14 23:31:41.627000] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:18.502 [2024-05-14 23:31:41.627092] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:18.502 [2024-05-14 23:31:41.627324] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:16:18.502 [2024-05-14 23:31:41.627343] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:16:18.502 [2024-05-14 23:31:41.627441] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.502 pt3 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.502 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.760 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:18.760 "name": "raid_bdev1", 00:16:18.760 "uuid": "ed619aff-c2c3-46cd-8d6b-a159f798e007", 00:16:18.760 "strip_size_kb": 64, 00:16:18.760 "state": "online", 00:16:18.760 "raid_level": "concat", 00:16:18.760 "superblock": true, 00:16:18.760 "num_base_bdevs": 3, 00:16:18.760 "num_base_bdevs_discovered": 3, 00:16:18.760 "num_base_bdevs_operational": 3, 00:16:18.760 "base_bdevs_list": [ 00:16:18.760 { 00:16:18.760 "name": "pt1", 00:16:18.760 "uuid": "2fb7cbca-1b46-551c-85ff-80f4e81aec0e", 00:16:18.760 "is_configured": true, 00:16:18.760 "data_offset": 2048, 00:16:18.760 "data_size": 63488 00:16:18.760 }, 00:16:18.760 { 00:16:18.760 "name": "pt2", 00:16:18.760 "uuid": "049e51ae-3c61-519a-a3aa-d7a18893b2ba", 00:16:18.760 "is_configured": true, 00:16:18.760 "data_offset": 2048, 00:16:18.760 "data_size": 63488 00:16:18.760 }, 00:16:18.760 { 00:16:18.760 "name": "pt3", 00:16:18.760 "uuid": "2a7501fa-1085-5340-a9ab-76050caf3ee8", 00:16:18.760 "is_configured": true, 00:16:18.760 "data_offset": 2048, 00:16:18.760 "data_size": 63488 00:16:18.760 } 00:16:18.760 ] 00:16:18.760 }' 00:16:18.760 23:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:18.760 23:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.326 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:19.326 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:16:19.326 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:19.326 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:19.326 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:19.326 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:19.326 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:19.327 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:19.584 [2024-05-14 23:31:42.838505] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.584 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:19.584 "name": "raid_bdev1", 00:16:19.584 "aliases": [ 00:16:19.584 "ed619aff-c2c3-46cd-8d6b-a159f798e007" 00:16:19.584 ], 00:16:19.584 "product_name": "Raid Volume", 00:16:19.584 "block_size": 512, 00:16:19.584 "num_blocks": 190464, 00:16:19.584 "uuid": "ed619aff-c2c3-46cd-8d6b-a159f798e007", 00:16:19.584 "assigned_rate_limits": { 00:16:19.584 "rw_ios_per_sec": 0, 00:16:19.584 "rw_mbytes_per_sec": 0, 00:16:19.584 "r_mbytes_per_sec": 0, 00:16:19.584 "w_mbytes_per_sec": 0 00:16:19.584 }, 00:16:19.584 "claimed": false, 00:16:19.584 "zoned": false, 00:16:19.584 "supported_io_types": { 00:16:19.584 "read": true, 00:16:19.584 "write": true, 00:16:19.584 "unmap": true, 00:16:19.584 "write_zeroes": true, 
00:16:19.584 "flush": true, 00:16:19.584 "reset": true, 00:16:19.584 "compare": false, 00:16:19.584 "compare_and_write": false, 00:16:19.584 "abort": false, 00:16:19.584 "nvme_admin": false, 00:16:19.584 "nvme_io": false 00:16:19.584 }, 00:16:19.584 "memory_domains": [ 00:16:19.585 { 00:16:19.585 "dma_device_id": "system", 00:16:19.585 "dma_device_type": 1 00:16:19.585 }, 00:16:19.585 { 00:16:19.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.585 "dma_device_type": 2 00:16:19.585 }, 00:16:19.585 { 00:16:19.585 "dma_device_id": "system", 00:16:19.585 "dma_device_type": 1 00:16:19.585 }, 00:16:19.585 { 00:16:19.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.585 "dma_device_type": 2 00:16:19.585 }, 00:16:19.585 { 00:16:19.585 "dma_device_id": "system", 00:16:19.585 "dma_device_type": 1 00:16:19.585 }, 00:16:19.585 { 00:16:19.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.585 "dma_device_type": 2 00:16:19.585 } 00:16:19.585 ], 00:16:19.585 "driver_specific": { 00:16:19.585 "raid": { 00:16:19.585 "uuid": "ed619aff-c2c3-46cd-8d6b-a159f798e007", 00:16:19.585 "strip_size_kb": 64, 00:16:19.585 "state": "online", 00:16:19.585 "raid_level": "concat", 00:16:19.585 "superblock": true, 00:16:19.585 "num_base_bdevs": 3, 00:16:19.585 "num_base_bdevs_discovered": 3, 00:16:19.585 "num_base_bdevs_operational": 3, 00:16:19.585 "base_bdevs_list": [ 00:16:19.585 { 00:16:19.585 "name": "pt1", 00:16:19.585 "uuid": "2fb7cbca-1b46-551c-85ff-80f4e81aec0e", 00:16:19.585 "is_configured": true, 00:16:19.585 "data_offset": 2048, 00:16:19.585 "data_size": 63488 00:16:19.585 }, 00:16:19.585 { 00:16:19.585 "name": "pt2", 00:16:19.585 "uuid": "049e51ae-3c61-519a-a3aa-d7a18893b2ba", 00:16:19.585 "is_configured": true, 00:16:19.585 "data_offset": 2048, 00:16:19.585 "data_size": 63488 00:16:19.585 }, 00:16:19.585 { 00:16:19.585 "name": "pt3", 00:16:19.585 "uuid": "2a7501fa-1085-5340-a9ab-76050caf3ee8", 00:16:19.585 "is_configured": true, 00:16:19.585 "data_offset": 2048, 00:16:19.585 "data_size": 63488 00:16:19.585 } 00:16:19.585 ] 00:16:19.585 } 00:16:19.585 } 00:16:19.585 }' 00:16:19.585 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.843 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:16:19.843 pt2 00:16:19.843 pt3' 00:16:19.843 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:19.843 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:19.843 23:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:20.101 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:20.101 "name": "pt1", 00:16:20.101 "aliases": [ 00:16:20.101 "2fb7cbca-1b46-551c-85ff-80f4e81aec0e" 00:16:20.101 ], 00:16:20.101 "product_name": "passthru", 00:16:20.101 "block_size": 512, 00:16:20.101 "num_blocks": 65536, 00:16:20.101 "uuid": "2fb7cbca-1b46-551c-85ff-80f4e81aec0e", 00:16:20.101 "assigned_rate_limits": { 00:16:20.101 "rw_ios_per_sec": 0, 00:16:20.101 "rw_mbytes_per_sec": 0, 00:16:20.101 "r_mbytes_per_sec": 0, 00:16:20.101 "w_mbytes_per_sec": 0 00:16:20.101 }, 00:16:20.101 "claimed": true, 00:16:20.101 "claim_type": "exclusive_write", 00:16:20.101 "zoned": false, 00:16:20.101 "supported_io_types": { 00:16:20.101 "read": true, 
00:16:20.101 "write": true, 00:16:20.101 "unmap": true, 00:16:20.101 "write_zeroes": true, 00:16:20.101 "flush": true, 00:16:20.101 "reset": true, 00:16:20.101 "compare": false, 00:16:20.101 "compare_and_write": false, 00:16:20.101 "abort": true, 00:16:20.101 "nvme_admin": false, 00:16:20.101 "nvme_io": false 00:16:20.101 }, 00:16:20.101 "memory_domains": [ 00:16:20.101 { 00:16:20.101 "dma_device_id": "system", 00:16:20.101 "dma_device_type": 1 00:16:20.101 }, 00:16:20.101 { 00:16:20.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.101 "dma_device_type": 2 00:16:20.101 } 00:16:20.101 ], 00:16:20.101 "driver_specific": { 00:16:20.101 "passthru": { 00:16:20.101 "name": "pt1", 00:16:20.101 "base_bdev_name": "malloc1" 00:16:20.101 } 00:16:20.101 } 00:16:20.101 }' 00:16:20.101 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:20.101 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:20.101 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:20.102 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:20.102 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:20.367 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:20.367 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:20.368 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:20.368 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:20.368 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:20.368 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:20.368 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:20.368 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:20.368 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:20.368 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:20.628 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:20.628 "name": "pt2", 00:16:20.628 "aliases": [ 00:16:20.628 "049e51ae-3c61-519a-a3aa-d7a18893b2ba" 00:16:20.628 ], 00:16:20.628 "product_name": "passthru", 00:16:20.628 "block_size": 512, 00:16:20.628 "num_blocks": 65536, 00:16:20.628 "uuid": "049e51ae-3c61-519a-a3aa-d7a18893b2ba", 00:16:20.628 "assigned_rate_limits": { 00:16:20.628 "rw_ios_per_sec": 0, 00:16:20.628 "rw_mbytes_per_sec": 0, 00:16:20.628 "r_mbytes_per_sec": 0, 00:16:20.628 "w_mbytes_per_sec": 0 00:16:20.628 }, 00:16:20.628 "claimed": true, 00:16:20.628 "claim_type": "exclusive_write", 00:16:20.628 "zoned": false, 00:16:20.628 "supported_io_types": { 00:16:20.628 "read": true, 00:16:20.628 "write": true, 00:16:20.628 "unmap": true, 00:16:20.628 "write_zeroes": true, 00:16:20.628 "flush": true, 00:16:20.628 "reset": true, 00:16:20.628 "compare": false, 00:16:20.628 "compare_and_write": false, 00:16:20.628 "abort": true, 00:16:20.628 "nvme_admin": false, 00:16:20.628 "nvme_io": false 00:16:20.628 }, 00:16:20.628 "memory_domains": [ 00:16:20.628 { 00:16:20.628 "dma_device_id": "system", 00:16:20.628 "dma_device_type": 1 
00:16:20.628 }, 00:16:20.628 { 00:16:20.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.628 "dma_device_type": 2 00:16:20.628 } 00:16:20.628 ], 00:16:20.628 "driver_specific": { 00:16:20.628 "passthru": { 00:16:20.628 "name": "pt2", 00:16:20.628 "base_bdev_name": "malloc2" 00:16:20.628 } 00:16:20.628 } 00:16:20.628 }' 00:16:20.628 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:20.887 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:20.887 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:20.887 23:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:20.887 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:20.887 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:20.887 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:20.887 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:20.887 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:20.887 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:21.146 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:21.146 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:21.146 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:21.146 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:21.146 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:21.404 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:21.404 "name": "pt3", 00:16:21.404 "aliases": [ 00:16:21.404 "2a7501fa-1085-5340-a9ab-76050caf3ee8" 00:16:21.404 ], 00:16:21.404 "product_name": "passthru", 00:16:21.404 "block_size": 512, 00:16:21.404 "num_blocks": 65536, 00:16:21.404 "uuid": "2a7501fa-1085-5340-a9ab-76050caf3ee8", 00:16:21.404 "assigned_rate_limits": { 00:16:21.404 "rw_ios_per_sec": 0, 00:16:21.404 "rw_mbytes_per_sec": 0, 00:16:21.404 "r_mbytes_per_sec": 0, 00:16:21.404 "w_mbytes_per_sec": 0 00:16:21.404 }, 00:16:21.404 "claimed": true, 00:16:21.404 "claim_type": "exclusive_write", 00:16:21.404 "zoned": false, 00:16:21.404 "supported_io_types": { 00:16:21.404 "read": true, 00:16:21.404 "write": true, 00:16:21.404 "unmap": true, 00:16:21.404 "write_zeroes": true, 00:16:21.404 "flush": true, 00:16:21.404 "reset": true, 00:16:21.404 "compare": false, 00:16:21.404 "compare_and_write": false, 00:16:21.404 "abort": true, 00:16:21.404 "nvme_admin": false, 00:16:21.404 "nvme_io": false 00:16:21.404 }, 00:16:21.404 "memory_domains": [ 00:16:21.404 { 00:16:21.404 "dma_device_id": "system", 00:16:21.404 "dma_device_type": 1 00:16:21.404 }, 00:16:21.404 { 00:16:21.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.404 "dma_device_type": 2 00:16:21.404 } 00:16:21.404 ], 00:16:21.404 "driver_specific": { 00:16:21.404 "passthru": { 00:16:21.404 "name": "pt3", 00:16:21.404 "base_bdev_name": "malloc3" 00:16:21.404 } 00:16:21.404 } 00:16:21.404 }' 00:16:21.404 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:21.404 23:31:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:21.404 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:21.404 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:21.404 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:21.404 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:21.404 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:21.663 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:21.663 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:21.663 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:21.663 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:21.663 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:21.663 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:21.663 23:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:21.922 [2024-05-14 23:31:45.025162] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ed619aff-c2c3-46cd-8d6b-a159f798e007 '!=' ed619aff-c2c3-46cd-8d6b-a159f798e007 ']' 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 60908 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 60908 ']' 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 60908 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60908 00:16:21.922 killing process with pid 60908 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60908' 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 60908 00:16:21.922 23:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 60908 00:16:21.922 [2024-05-14 23:31:45.061880] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.922 [2024-05-14 23:31:45.061957] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.922 [2024-05-14 23:31:45.062002] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:16:21.922 [2024-05-14 23:31:45.062013] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:16:22.181 [2024-05-14 23:31:45.311241] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.557 ************************************ 00:16:23.557 END TEST raid_superblock_test 00:16:23.557 ************************************ 00:16:23.557 23:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:16:23.557 00:16:23.557 real 0m15.943s 00:16:23.557 user 0m28.771s 00:16:23.557 sys 0m1.699s 00:16:23.557 23:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:23.557 23:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.557 23:31:46 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:16:23.557 23:31:46 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:23.557 23:31:46 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:23.557 23:31:46 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:23.557 23:31:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.557 ************************************ 00:16:23.557 START TEST raid_state_function_test 00:16:23.557 ************************************ 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 false 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:23.557 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:16:23.558 
23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:16:23.558 Process raid pid: 61400 00:16:23.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=61400 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 61400' 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 61400 /var/tmp/spdk-raid.sock 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 61400 ']' 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:23.558 23:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.558 [2024-05-14 23:31:46.769339] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:16:23.558 [2024-05-14 23:31:46.769578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.817 [2024-05-14 23:31:46.921677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.076 [2024-05-14 23:31:47.138979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.076 [2024-05-14 23:31:47.337501] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.334 23:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:24.334 23:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:16:24.334 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:24.594 [2024-05-14 23:31:47.711740] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.594 [2024-05-14 23:31:47.711815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.594 [2024-05-14 23:31:47.711830] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.594 [2024-05-14 23:31:47.711849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.594 [2024-05-14 23:31:47.711858] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.594 [2024-05-14 23:31:47.711901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.594 23:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.852 23:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.852 "name": "Existed_Raid", 00:16:24.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.853 "strip_size_kb": 0, 00:16:24.853 
"state": "configuring", 00:16:24.853 "raid_level": "raid1", 00:16:24.853 "superblock": false, 00:16:24.853 "num_base_bdevs": 3, 00:16:24.853 "num_base_bdevs_discovered": 0, 00:16:24.853 "num_base_bdevs_operational": 3, 00:16:24.853 "base_bdevs_list": [ 00:16:24.853 { 00:16:24.853 "name": "BaseBdev1", 00:16:24.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.853 "is_configured": false, 00:16:24.853 "data_offset": 0, 00:16:24.853 "data_size": 0 00:16:24.853 }, 00:16:24.853 { 00:16:24.853 "name": "BaseBdev2", 00:16:24.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.853 "is_configured": false, 00:16:24.853 "data_offset": 0, 00:16:24.853 "data_size": 0 00:16:24.853 }, 00:16:24.853 { 00:16:24.853 "name": "BaseBdev3", 00:16:24.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.853 "is_configured": false, 00:16:24.853 "data_offset": 0, 00:16:24.853 "data_size": 0 00:16:24.853 } 00:16:24.853 ] 00:16:24.853 }' 00:16:24.853 23:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.853 23:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.788 23:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:25.788 [2024-05-14 23:31:48.927762] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.788 [2024-05-14 23:31:48.927807] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:16:25.788 23:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:26.046 [2024-05-14 23:31:49.123821] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.046 [2024-05-14 23:31:49.123930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.046 [2024-05-14 23:31:49.123961] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.046 [2024-05-14 23:31:49.123994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.046 [2024-05-14 23:31:49.124004] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:26.046 [2024-05-14 23:31:49.124027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:26.046 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:26.311 [2024-05-14 23:31:49.365602] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.311 BaseBdev1 00:16:26.311 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:16:26.311 23:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:26.311 23:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:26.311 23:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:26.311 23:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:26.311 
23:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:26.311 23:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:26.311 23:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:26.569 [ 00:16:26.569 { 00:16:26.569 "name": "BaseBdev1", 00:16:26.569 "aliases": [ 00:16:26.569 "354ae3d7-105e-4b61-bcd7-5335c882de64" 00:16:26.569 ], 00:16:26.569 "product_name": "Malloc disk", 00:16:26.569 "block_size": 512, 00:16:26.569 "num_blocks": 65536, 00:16:26.569 "uuid": "354ae3d7-105e-4b61-bcd7-5335c882de64", 00:16:26.569 "assigned_rate_limits": { 00:16:26.569 "rw_ios_per_sec": 0, 00:16:26.569 "rw_mbytes_per_sec": 0, 00:16:26.569 "r_mbytes_per_sec": 0, 00:16:26.569 "w_mbytes_per_sec": 0 00:16:26.569 }, 00:16:26.569 "claimed": true, 00:16:26.569 "claim_type": "exclusive_write", 00:16:26.569 "zoned": false, 00:16:26.569 "supported_io_types": { 00:16:26.569 "read": true, 00:16:26.569 "write": true, 00:16:26.569 "unmap": true, 00:16:26.569 "write_zeroes": true, 00:16:26.569 "flush": true, 00:16:26.569 "reset": true, 00:16:26.569 "compare": false, 00:16:26.569 "compare_and_write": false, 00:16:26.569 "abort": true, 00:16:26.569 "nvme_admin": false, 00:16:26.569 "nvme_io": false 00:16:26.569 }, 00:16:26.569 "memory_domains": [ 00:16:26.569 { 00:16:26.569 "dma_device_id": "system", 00:16:26.569 "dma_device_type": 1 00:16:26.569 }, 00:16:26.569 { 00:16:26.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.569 "dma_device_type": 2 00:16:26.569 } 00:16:26.569 ], 00:16:26.570 "driver_specific": {} 00:16:26.570 } 00:16:26.570 ] 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.570 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.828 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.828 "name": 
"Existed_Raid", 00:16:26.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.828 "strip_size_kb": 0, 00:16:26.828 "state": "configuring", 00:16:26.828 "raid_level": "raid1", 00:16:26.828 "superblock": false, 00:16:26.828 "num_base_bdevs": 3, 00:16:26.828 "num_base_bdevs_discovered": 1, 00:16:26.828 "num_base_bdevs_operational": 3, 00:16:26.828 "base_bdevs_list": [ 00:16:26.828 { 00:16:26.828 "name": "BaseBdev1", 00:16:26.828 "uuid": "354ae3d7-105e-4b61-bcd7-5335c882de64", 00:16:26.828 "is_configured": true, 00:16:26.828 "data_offset": 0, 00:16:26.828 "data_size": 65536 00:16:26.828 }, 00:16:26.828 { 00:16:26.828 "name": "BaseBdev2", 00:16:26.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.828 "is_configured": false, 00:16:26.828 "data_offset": 0, 00:16:26.828 "data_size": 0 00:16:26.828 }, 00:16:26.828 { 00:16:26.828 "name": "BaseBdev3", 00:16:26.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.828 "is_configured": false, 00:16:26.828 "data_offset": 0, 00:16:26.828 "data_size": 0 00:16:26.828 } 00:16:26.828 ] 00:16:26.828 }' 00:16:26.828 23:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.828 23:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.396 23:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:27.655 [2024-05-14 23:31:50.845798] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.655 [2024-05-14 23:31:50.845855] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:16:27.655 23:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:27.914 [2024-05-14 23:31:51.089861] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.914 [2024-05-14 23:31:51.091677] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.914 [2024-05-14 23:31:51.091752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.914 [2024-05-14 23:31:51.091770] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:27.914 [2024-05-14 23:31:51.091812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.914 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.172 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.172 "name": "Existed_Raid", 00:16:28.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.172 "strip_size_kb": 0, 00:16:28.172 "state": "configuring", 00:16:28.172 "raid_level": "raid1", 00:16:28.172 "superblock": false, 00:16:28.172 "num_base_bdevs": 3, 00:16:28.172 "num_base_bdevs_discovered": 1, 00:16:28.172 "num_base_bdevs_operational": 3, 00:16:28.172 "base_bdevs_list": [ 00:16:28.172 { 00:16:28.172 "name": "BaseBdev1", 00:16:28.172 "uuid": "354ae3d7-105e-4b61-bcd7-5335c882de64", 00:16:28.172 "is_configured": true, 00:16:28.172 "data_offset": 0, 00:16:28.172 "data_size": 65536 00:16:28.172 }, 00:16:28.172 { 00:16:28.172 "name": "BaseBdev2", 00:16:28.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.172 "is_configured": false, 00:16:28.172 "data_offset": 0, 00:16:28.172 "data_size": 0 00:16:28.172 }, 00:16:28.172 { 00:16:28.172 "name": "BaseBdev3", 00:16:28.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.172 "is_configured": false, 00:16:28.172 "data_offset": 0, 00:16:28.172 "data_size": 0 00:16:28.172 } 00:16:28.172 ] 00:16:28.172 }' 00:16:28.172 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.172 23:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.743 23:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:29.002 [2024-05-14 23:31:52.144694] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.002 BaseBdev2 00:16:29.002 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:16:29.002 23:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:29.002 23:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:29.002 23:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:29.002 23:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:29.002 23:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:29.002 23:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.261 23:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:29.520 [ 00:16:29.520 { 00:16:29.520 "name": "BaseBdev2", 00:16:29.520 "aliases": [ 00:16:29.520 "6b6c2bd2-7d31-4f46-8842-94af528670e8" 00:16:29.520 ], 00:16:29.520 "product_name": "Malloc disk", 00:16:29.520 "block_size": 512, 00:16:29.520 "num_blocks": 65536, 00:16:29.520 "uuid": "6b6c2bd2-7d31-4f46-8842-94af528670e8", 00:16:29.520 "assigned_rate_limits": { 00:16:29.520 "rw_ios_per_sec": 0, 00:16:29.520 "rw_mbytes_per_sec": 0, 00:16:29.520 "r_mbytes_per_sec": 0, 00:16:29.520 "w_mbytes_per_sec": 0 00:16:29.520 }, 00:16:29.520 "claimed": true, 00:16:29.520 "claim_type": "exclusive_write", 00:16:29.520 "zoned": false, 00:16:29.520 "supported_io_types": { 00:16:29.520 "read": true, 00:16:29.520 "write": true, 00:16:29.520 "unmap": true, 00:16:29.520 "write_zeroes": true, 00:16:29.520 "flush": true, 00:16:29.520 "reset": true, 00:16:29.520 "compare": false, 00:16:29.520 "compare_and_write": false, 00:16:29.520 "abort": true, 00:16:29.520 "nvme_admin": false, 00:16:29.520 "nvme_io": false 00:16:29.520 }, 00:16:29.520 "memory_domains": [ 00:16:29.520 { 00:16:29.520 "dma_device_id": "system", 00:16:29.520 "dma_device_type": 1 00:16:29.520 }, 00:16:29.520 { 00:16:29.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.520 "dma_device_type": 2 00:16:29.520 } 00:16:29.520 ], 00:16:29.520 "driver_specific": {} 00:16:29.520 } 00:16:29.520 ] 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.520 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.852 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:29.852 "name": "Existed_Raid", 00:16:29.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.852 "strip_size_kb": 0, 00:16:29.852 "state": "configuring", 00:16:29.852 "raid_level": "raid1", 00:16:29.852 "superblock": false, 00:16:29.852 "num_base_bdevs": 
3, 00:16:29.852 "num_base_bdevs_discovered": 2, 00:16:29.852 "num_base_bdevs_operational": 3, 00:16:29.852 "base_bdevs_list": [ 00:16:29.852 { 00:16:29.852 "name": "BaseBdev1", 00:16:29.852 "uuid": "354ae3d7-105e-4b61-bcd7-5335c882de64", 00:16:29.852 "is_configured": true, 00:16:29.852 "data_offset": 0, 00:16:29.852 "data_size": 65536 00:16:29.852 }, 00:16:29.852 { 00:16:29.852 "name": "BaseBdev2", 00:16:29.852 "uuid": "6b6c2bd2-7d31-4f46-8842-94af528670e8", 00:16:29.852 "is_configured": true, 00:16:29.852 "data_offset": 0, 00:16:29.852 "data_size": 65536 00:16:29.852 }, 00:16:29.852 { 00:16:29.852 "name": "BaseBdev3", 00:16:29.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.852 "is_configured": false, 00:16:29.852 "data_offset": 0, 00:16:29.852 "data_size": 0 00:16:29.852 } 00:16:29.852 ] 00:16:29.852 }' 00:16:29.852 23:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:29.852 23:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.418 23:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:30.677 [2024-05-14 23:31:53.752134] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.677 [2024-05-14 23:31:53.752206] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:16:30.677 [2024-05-14 23:31:53.752217] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:30.677 [2024-05-14 23:31:53.752349] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:30.677 [2024-05-14 23:31:53.752610] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:16:30.677 [2024-05-14 23:31:53.752628] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:16:30.677 [2024-05-14 23:31:53.752807] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.677 BaseBdev3 00:16:30.677 23:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:16:30.677 23:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:30.677 23:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:30.677 23:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:30.677 23:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:30.677 23:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:30.677 23:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:30.937 23:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:30.937 [ 00:16:30.937 { 00:16:30.937 "name": "BaseBdev3", 00:16:30.937 "aliases": [ 00:16:30.937 "81d8099c-3bd4-491b-bc86-ed2e855505f5" 00:16:30.937 ], 00:16:30.937 "product_name": "Malloc disk", 00:16:30.937 "block_size": 512, 00:16:30.937 "num_blocks": 65536, 00:16:30.937 "uuid": "81d8099c-3bd4-491b-bc86-ed2e855505f5", 
00:16:30.937 "assigned_rate_limits": { 00:16:30.937 "rw_ios_per_sec": 0, 00:16:30.937 "rw_mbytes_per_sec": 0, 00:16:30.937 "r_mbytes_per_sec": 0, 00:16:30.937 "w_mbytes_per_sec": 0 00:16:30.937 }, 00:16:30.937 "claimed": true, 00:16:30.937 "claim_type": "exclusive_write", 00:16:30.937 "zoned": false, 00:16:30.937 "supported_io_types": { 00:16:30.937 "read": true, 00:16:30.937 "write": true, 00:16:30.937 "unmap": true, 00:16:30.937 "write_zeroes": true, 00:16:30.937 "flush": true, 00:16:30.937 "reset": true, 00:16:30.937 "compare": false, 00:16:30.937 "compare_and_write": false, 00:16:30.937 "abort": true, 00:16:30.937 "nvme_admin": false, 00:16:30.937 "nvme_io": false 00:16:30.937 }, 00:16:30.937 "memory_domains": [ 00:16:30.937 { 00:16:30.937 "dma_device_id": "system", 00:16:30.937 "dma_device_type": 1 00:16:30.937 }, 00:16:30.937 { 00:16:30.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.937 "dma_device_type": 2 00:16:30.937 } 00:16:30.937 ], 00:16:30.937 "driver_specific": {} 00:16:30.937 } 00:16:30.937 ] 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.937 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.196 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.196 "name": "Existed_Raid", 00:16:31.196 "uuid": "b2ff3f27-0ca3-41dd-8beb-53de176da9b3", 00:16:31.196 "strip_size_kb": 0, 00:16:31.196 "state": "online", 00:16:31.196 "raid_level": "raid1", 00:16:31.196 "superblock": false, 00:16:31.196 "num_base_bdevs": 3, 00:16:31.196 "num_base_bdevs_discovered": 3, 00:16:31.196 "num_base_bdevs_operational": 3, 00:16:31.196 "base_bdevs_list": [ 00:16:31.196 { 00:16:31.196 "name": "BaseBdev1", 00:16:31.196 "uuid": "354ae3d7-105e-4b61-bcd7-5335c882de64", 00:16:31.196 "is_configured": true, 00:16:31.196 "data_offset": 0, 00:16:31.196 "data_size": 65536 00:16:31.196 }, 00:16:31.196 { 00:16:31.196 
"name": "BaseBdev2", 00:16:31.196 "uuid": "6b6c2bd2-7d31-4f46-8842-94af528670e8", 00:16:31.196 "is_configured": true, 00:16:31.196 "data_offset": 0, 00:16:31.196 "data_size": 65536 00:16:31.196 }, 00:16:31.196 { 00:16:31.196 "name": "BaseBdev3", 00:16:31.196 "uuid": "81d8099c-3bd4-491b-bc86-ed2e855505f5", 00:16:31.196 "is_configured": true, 00:16:31.196 "data_offset": 0, 00:16:31.196 "data_size": 65536 00:16:31.196 } 00:16:31.196 ] 00:16:31.196 }' 00:16:31.196 23:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.196 23:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.133 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.133 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:32.133 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:32.133 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:32.133 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:32.133 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:32.133 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:32.133 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:32.133 [2024-05-14 23:31:55.324763] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.133 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:32.133 "name": "Existed_Raid", 00:16:32.133 "aliases": [ 00:16:32.133 "b2ff3f27-0ca3-41dd-8beb-53de176da9b3" 00:16:32.133 ], 00:16:32.133 "product_name": "Raid Volume", 00:16:32.133 "block_size": 512, 00:16:32.133 "num_blocks": 65536, 00:16:32.133 "uuid": "b2ff3f27-0ca3-41dd-8beb-53de176da9b3", 00:16:32.133 "assigned_rate_limits": { 00:16:32.133 "rw_ios_per_sec": 0, 00:16:32.133 "rw_mbytes_per_sec": 0, 00:16:32.133 "r_mbytes_per_sec": 0, 00:16:32.133 "w_mbytes_per_sec": 0 00:16:32.133 }, 00:16:32.133 "claimed": false, 00:16:32.133 "zoned": false, 00:16:32.133 "supported_io_types": { 00:16:32.133 "read": true, 00:16:32.133 "write": true, 00:16:32.133 "unmap": false, 00:16:32.133 "write_zeroes": true, 00:16:32.133 "flush": false, 00:16:32.133 "reset": true, 00:16:32.133 "compare": false, 00:16:32.133 "compare_and_write": false, 00:16:32.133 "abort": false, 00:16:32.133 "nvme_admin": false, 00:16:32.133 "nvme_io": false 00:16:32.133 }, 00:16:32.133 "memory_domains": [ 00:16:32.133 { 00:16:32.133 "dma_device_id": "system", 00:16:32.133 "dma_device_type": 1 00:16:32.133 }, 00:16:32.133 { 00:16:32.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.133 "dma_device_type": 2 00:16:32.133 }, 00:16:32.133 { 00:16:32.133 "dma_device_id": "system", 00:16:32.133 "dma_device_type": 1 00:16:32.133 }, 00:16:32.133 { 00:16:32.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.133 "dma_device_type": 2 00:16:32.133 }, 00:16:32.133 { 00:16:32.133 "dma_device_id": "system", 00:16:32.133 "dma_device_type": 1 00:16:32.133 }, 00:16:32.133 { 00:16:32.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.133 "dma_device_type": 2 00:16:32.133 } 00:16:32.133 ], 00:16:32.133 "driver_specific": { 
00:16:32.133 "raid": { 00:16:32.133 "uuid": "b2ff3f27-0ca3-41dd-8beb-53de176da9b3", 00:16:32.133 "strip_size_kb": 0, 00:16:32.133 "state": "online", 00:16:32.133 "raid_level": "raid1", 00:16:32.133 "superblock": false, 00:16:32.133 "num_base_bdevs": 3, 00:16:32.133 "num_base_bdevs_discovered": 3, 00:16:32.133 "num_base_bdevs_operational": 3, 00:16:32.133 "base_bdevs_list": [ 00:16:32.133 { 00:16:32.133 "name": "BaseBdev1", 00:16:32.133 "uuid": "354ae3d7-105e-4b61-bcd7-5335c882de64", 00:16:32.133 "is_configured": true, 00:16:32.133 "data_offset": 0, 00:16:32.133 "data_size": 65536 00:16:32.133 }, 00:16:32.133 { 00:16:32.133 "name": "BaseBdev2", 00:16:32.133 "uuid": "6b6c2bd2-7d31-4f46-8842-94af528670e8", 00:16:32.133 "is_configured": true, 00:16:32.133 "data_offset": 0, 00:16:32.133 "data_size": 65536 00:16:32.133 }, 00:16:32.133 { 00:16:32.133 "name": "BaseBdev3", 00:16:32.133 "uuid": "81d8099c-3bd4-491b-bc86-ed2e855505f5", 00:16:32.133 "is_configured": true, 00:16:32.133 "data_offset": 0, 00:16:32.133 "data_size": 65536 00:16:32.133 } 00:16:32.133 ] 00:16:32.133 } 00:16:32.133 } 00:16:32.133 }' 00:16:32.133 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.391 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:16:32.391 BaseBdev2 00:16:32.391 BaseBdev3' 00:16:32.391 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:32.391 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:32.391 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:32.391 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:32.391 "name": "BaseBdev1", 00:16:32.391 "aliases": [ 00:16:32.391 "354ae3d7-105e-4b61-bcd7-5335c882de64" 00:16:32.391 ], 00:16:32.391 "product_name": "Malloc disk", 00:16:32.391 "block_size": 512, 00:16:32.391 "num_blocks": 65536, 00:16:32.391 "uuid": "354ae3d7-105e-4b61-bcd7-5335c882de64", 00:16:32.391 "assigned_rate_limits": { 00:16:32.391 "rw_ios_per_sec": 0, 00:16:32.391 "rw_mbytes_per_sec": 0, 00:16:32.391 "r_mbytes_per_sec": 0, 00:16:32.391 "w_mbytes_per_sec": 0 00:16:32.391 }, 00:16:32.391 "claimed": true, 00:16:32.391 "claim_type": "exclusive_write", 00:16:32.391 "zoned": false, 00:16:32.391 "supported_io_types": { 00:16:32.391 "read": true, 00:16:32.391 "write": true, 00:16:32.391 "unmap": true, 00:16:32.391 "write_zeroes": true, 00:16:32.391 "flush": true, 00:16:32.391 "reset": true, 00:16:32.391 "compare": false, 00:16:32.391 "compare_and_write": false, 00:16:32.392 "abort": true, 00:16:32.392 "nvme_admin": false, 00:16:32.392 "nvme_io": false 00:16:32.392 }, 00:16:32.392 "memory_domains": [ 00:16:32.392 { 00:16:32.392 "dma_device_id": "system", 00:16:32.392 "dma_device_type": 1 00:16:32.392 }, 00:16:32.392 { 00:16:32.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.392 "dma_device_type": 2 00:16:32.392 } 00:16:32.392 ], 00:16:32.392 "driver_specific": {} 00:16:32.392 }' 00:16:32.392 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:32.650 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:32.650 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- 
# [[ 512 == 512 ]] 00:16:32.650 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:32.650 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:32.650 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.650 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:32.650 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:32.909 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.909 23:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:32.909 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:32.909 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:32.909 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:32.909 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:32.909 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:33.169 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:33.169 "name": "BaseBdev2", 00:16:33.169 "aliases": [ 00:16:33.169 "6b6c2bd2-7d31-4f46-8842-94af528670e8" 00:16:33.169 ], 00:16:33.169 "product_name": "Malloc disk", 00:16:33.169 "block_size": 512, 00:16:33.169 "num_blocks": 65536, 00:16:33.169 "uuid": "6b6c2bd2-7d31-4f46-8842-94af528670e8", 00:16:33.169 "assigned_rate_limits": { 00:16:33.169 "rw_ios_per_sec": 0, 00:16:33.169 "rw_mbytes_per_sec": 0, 00:16:33.169 "r_mbytes_per_sec": 0, 00:16:33.169 "w_mbytes_per_sec": 0 00:16:33.169 }, 00:16:33.169 "claimed": true, 00:16:33.169 "claim_type": "exclusive_write", 00:16:33.169 "zoned": false, 00:16:33.169 "supported_io_types": { 00:16:33.169 "read": true, 00:16:33.169 "write": true, 00:16:33.169 "unmap": true, 00:16:33.169 "write_zeroes": true, 00:16:33.169 "flush": true, 00:16:33.169 "reset": true, 00:16:33.169 "compare": false, 00:16:33.169 "compare_and_write": false, 00:16:33.169 "abort": true, 00:16:33.169 "nvme_admin": false, 00:16:33.169 "nvme_io": false 00:16:33.169 }, 00:16:33.169 "memory_domains": [ 00:16:33.169 { 00:16:33.169 "dma_device_id": "system", 00:16:33.169 "dma_device_type": 1 00:16:33.169 }, 00:16:33.169 { 00:16:33.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.169 "dma_device_type": 2 00:16:33.169 } 00:16:33.169 ], 00:16:33.169 "driver_specific": {} 00:16:33.169 }' 00:16:33.169 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:33.169 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:33.169 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:33.169 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:33.428 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:33.428 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:33.428 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:33.428 23:31:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:33.428 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:33.428 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:33.687 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:33.687 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:33.687 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:33.687 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:33.687 23:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:33.946 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:33.946 "name": "BaseBdev3", 00:16:33.946 "aliases": [ 00:16:33.946 "81d8099c-3bd4-491b-bc86-ed2e855505f5" 00:16:33.946 ], 00:16:33.946 "product_name": "Malloc disk", 00:16:33.946 "block_size": 512, 00:16:33.946 "num_blocks": 65536, 00:16:33.946 "uuid": "81d8099c-3bd4-491b-bc86-ed2e855505f5", 00:16:33.946 "assigned_rate_limits": { 00:16:33.946 "rw_ios_per_sec": 0, 00:16:33.946 "rw_mbytes_per_sec": 0, 00:16:33.946 "r_mbytes_per_sec": 0, 00:16:33.946 "w_mbytes_per_sec": 0 00:16:33.946 }, 00:16:33.946 "claimed": true, 00:16:33.946 "claim_type": "exclusive_write", 00:16:33.946 "zoned": false, 00:16:33.946 "supported_io_types": { 00:16:33.946 "read": true, 00:16:33.946 "write": true, 00:16:33.946 "unmap": true, 00:16:33.946 "write_zeroes": true, 00:16:33.946 "flush": true, 00:16:33.946 "reset": true, 00:16:33.946 "compare": false, 00:16:33.946 "compare_and_write": false, 00:16:33.946 "abort": true, 00:16:33.946 "nvme_admin": false, 00:16:33.946 "nvme_io": false 00:16:33.946 }, 00:16:33.946 "memory_domains": [ 00:16:33.946 { 00:16:33.947 "dma_device_id": "system", 00:16:33.947 "dma_device_type": 1 00:16:33.947 }, 00:16:33.947 { 00:16:33.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.947 "dma_device_type": 2 00:16:33.947 } 00:16:33.947 ], 00:16:33.947 "driver_specific": {} 00:16:33.947 }' 00:16:33.947 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:33.947 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:33.947 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:33.947 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:33.947 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:34.205 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:34.205 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:34.206 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:34.206 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:34.206 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:34.206 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:34.206 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:34.206 23:31:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:34.465 [2024-05-14 23:31:57.632881] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.465 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.724 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.724 "name": "Existed_Raid", 00:16:34.724 "uuid": "b2ff3f27-0ca3-41dd-8beb-53de176da9b3", 00:16:34.724 "strip_size_kb": 0, 00:16:34.724 "state": "online", 00:16:34.724 "raid_level": "raid1", 00:16:34.724 "superblock": false, 00:16:34.724 "num_base_bdevs": 3, 00:16:34.724 "num_base_bdevs_discovered": 2, 00:16:34.724 "num_base_bdevs_operational": 2, 00:16:34.724 "base_bdevs_list": [ 00:16:34.724 { 00:16:34.724 "name": null, 00:16:34.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.724 "is_configured": false, 00:16:34.724 "data_offset": 0, 00:16:34.724 "data_size": 65536 00:16:34.724 }, 00:16:34.724 { 00:16:34.724 "name": "BaseBdev2", 00:16:34.724 "uuid": "6b6c2bd2-7d31-4f46-8842-94af528670e8", 00:16:34.724 "is_configured": true, 00:16:34.724 "data_offset": 0, 00:16:34.724 "data_size": 65536 00:16:34.724 }, 00:16:34.724 { 00:16:34.724 "name": "BaseBdev3", 00:16:34.725 "uuid": "81d8099c-3bd4-491b-bc86-ed2e855505f5", 00:16:34.725 "is_configured": true, 00:16:34.725 "data_offset": 0, 00:16:34.725 "data_size": 65536 00:16:34.725 } 00:16:34.725 ] 00:16:34.725 }' 00:16:34.725 23:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
00:16:34.725 23:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.662 23:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:35.662 23:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:35.662 23:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.662 23:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:16:35.662 23:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:16:35.662 23:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:35.662 23:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:35.921 [2024-05-14 23:31:59.127550] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:36.179 23:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:36.179 23:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:36.179 23:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.179 23:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:16:36.179 23:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:16:36.179 23:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.179 23:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:36.437 [2024-05-14 23:31:59.672066] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:36.437 [2024-05-14 23:31:59.672141] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.695 [2024-05-14 23:31:59.751851] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.695 [2024-05-14 23:31:59.751973] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.695 [2024-05-14 23:31:59.751988] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:16:36.695 23:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:36.695 23:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:36.695 23:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.695 23:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:16:36.954 23:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:16:36.954 23:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:16:36.954 23:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:16:36.954 
23:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:16:36.954 23:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:36.954 23:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:36.954 BaseBdev2 00:16:37.213 23:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:16:37.213 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:37.213 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:37.213 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:37.213 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:37.213 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:37.213 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.472 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.472 [ 00:16:37.472 { 00:16:37.472 "name": "BaseBdev2", 00:16:37.472 "aliases": [ 00:16:37.472 "92638523-3a3f-467d-97e5-c6abc6fb3e07" 00:16:37.472 ], 00:16:37.472 "product_name": "Malloc disk", 00:16:37.472 "block_size": 512, 00:16:37.472 "num_blocks": 65536, 00:16:37.472 "uuid": "92638523-3a3f-467d-97e5-c6abc6fb3e07", 00:16:37.472 "assigned_rate_limits": { 00:16:37.472 "rw_ios_per_sec": 0, 00:16:37.472 "rw_mbytes_per_sec": 0, 00:16:37.472 "r_mbytes_per_sec": 0, 00:16:37.472 "w_mbytes_per_sec": 0 00:16:37.472 }, 00:16:37.472 "claimed": false, 00:16:37.472 "zoned": false, 00:16:37.472 "supported_io_types": { 00:16:37.472 "read": true, 00:16:37.472 "write": true, 00:16:37.472 "unmap": true, 00:16:37.472 "write_zeroes": true, 00:16:37.472 "flush": true, 00:16:37.472 "reset": true, 00:16:37.472 "compare": false, 00:16:37.472 "compare_and_write": false, 00:16:37.472 "abort": true, 00:16:37.472 "nvme_admin": false, 00:16:37.472 "nvme_io": false 00:16:37.472 }, 00:16:37.472 "memory_domains": [ 00:16:37.472 { 00:16:37.472 "dma_device_id": "system", 00:16:37.472 "dma_device_type": 1 00:16:37.472 }, 00:16:37.472 { 00:16:37.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.472 "dma_device_type": 2 00:16:37.472 } 00:16:37.472 ], 00:16:37.472 "driver_specific": {} 00:16:37.472 } 00:16:37.472 ] 00:16:37.472 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:37.472 23:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:16:37.472 23:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:37.472 23:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:37.732 BaseBdev3 00:16:37.732 23:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:16:37.732 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 
00:16:37.732 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:37.732 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:37.732 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:37.732 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:37.732 23:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.991 23:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.250 [ 00:16:38.250 { 00:16:38.250 "name": "BaseBdev3", 00:16:38.250 "aliases": [ 00:16:38.250 "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9" 00:16:38.250 ], 00:16:38.250 "product_name": "Malloc disk", 00:16:38.250 "block_size": 512, 00:16:38.250 "num_blocks": 65536, 00:16:38.250 "uuid": "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9", 00:16:38.250 "assigned_rate_limits": { 00:16:38.250 "rw_ios_per_sec": 0, 00:16:38.250 "rw_mbytes_per_sec": 0, 00:16:38.250 "r_mbytes_per_sec": 0, 00:16:38.250 "w_mbytes_per_sec": 0 00:16:38.250 }, 00:16:38.250 "claimed": false, 00:16:38.250 "zoned": false, 00:16:38.250 "supported_io_types": { 00:16:38.250 "read": true, 00:16:38.250 "write": true, 00:16:38.250 "unmap": true, 00:16:38.250 "write_zeroes": true, 00:16:38.250 "flush": true, 00:16:38.250 "reset": true, 00:16:38.250 "compare": false, 00:16:38.250 "compare_and_write": false, 00:16:38.250 "abort": true, 00:16:38.250 "nvme_admin": false, 00:16:38.250 "nvme_io": false 00:16:38.250 }, 00:16:38.250 "memory_domains": [ 00:16:38.250 { 00:16:38.250 "dma_device_id": "system", 00:16:38.250 "dma_device_type": 1 00:16:38.250 }, 00:16:38.250 { 00:16:38.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.250 "dma_device_type": 2 00:16:38.250 } 00:16:38.250 ], 00:16:38.250 "driver_specific": {} 00:16:38.250 } 00:16:38.250 ] 00:16:38.250 23:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:38.250 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:16:38.250 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:16:38.250 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:38.508 [2024-05-14 23:32:01.597942] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.508 [2024-05-14 23:32:01.598028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.508 [2024-05-14 23:32:01.598072] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.508 [2024-05-14 23:32:01.599783] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local expected_state=configuring 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.508 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.766 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.766 "name": "Existed_Raid", 00:16:38.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.766 "strip_size_kb": 0, 00:16:38.766 "state": "configuring", 00:16:38.766 "raid_level": "raid1", 00:16:38.766 "superblock": false, 00:16:38.766 "num_base_bdevs": 3, 00:16:38.766 "num_base_bdevs_discovered": 2, 00:16:38.766 "num_base_bdevs_operational": 3, 00:16:38.766 "base_bdevs_list": [ 00:16:38.766 { 00:16:38.766 "name": "BaseBdev1", 00:16:38.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.766 "is_configured": false, 00:16:38.766 "data_offset": 0, 00:16:38.766 "data_size": 0 00:16:38.766 }, 00:16:38.766 { 00:16:38.766 "name": "BaseBdev2", 00:16:38.766 "uuid": "92638523-3a3f-467d-97e5-c6abc6fb3e07", 00:16:38.766 "is_configured": true, 00:16:38.766 "data_offset": 0, 00:16:38.766 "data_size": 65536 00:16:38.766 }, 00:16:38.766 { 00:16:38.766 "name": "BaseBdev3", 00:16:38.766 "uuid": "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9", 00:16:38.766 "is_configured": true, 00:16:38.766 "data_offset": 0, 00:16:38.766 "data_size": 65536 00:16:38.766 } 00:16:38.766 ] 00:16:38.766 }' 00:16:38.766 23:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.766 23:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.332 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:39.592 [2024-05-14 23:32:02.774108] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.592 23:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.851 23:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.851 "name": "Existed_Raid", 00:16:39.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.851 "strip_size_kb": 0, 00:16:39.851 "state": "configuring", 00:16:39.851 "raid_level": "raid1", 00:16:39.851 "superblock": false, 00:16:39.851 "num_base_bdevs": 3, 00:16:39.851 "num_base_bdevs_discovered": 1, 00:16:39.851 "num_base_bdevs_operational": 3, 00:16:39.851 "base_bdevs_list": [ 00:16:39.851 { 00:16:39.851 "name": "BaseBdev1", 00:16:39.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.851 "is_configured": false, 00:16:39.851 "data_offset": 0, 00:16:39.851 "data_size": 0 00:16:39.851 }, 00:16:39.851 { 00:16:39.851 "name": null, 00:16:39.851 "uuid": "92638523-3a3f-467d-97e5-c6abc6fb3e07", 00:16:39.851 "is_configured": false, 00:16:39.851 "data_offset": 0, 00:16:39.851 "data_size": 65536 00:16:39.851 }, 00:16:39.851 { 00:16:39.851 "name": "BaseBdev3", 00:16:39.851 "uuid": "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9", 00:16:39.851 "is_configured": true, 00:16:39.851 "data_offset": 0, 00:16:39.851 "data_size": 65536 00:16:39.851 } 00:16:39.851 ] 00:16:39.851 }' 00:16:39.851 23:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.851 23:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.799 23:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.799 23:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:40.799 23:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:16:40.799 23:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:41.059 [2024-05-14 23:32:04.219163] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.059 BaseBdev1 00:16:41.059 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:16:41.059 23:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:41.059 23:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:41.059 23:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:41.059 23:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:41.059 23:32:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:41.059 23:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.318 23:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:41.578 [ 00:16:41.578 { 00:16:41.578 "name": "BaseBdev1", 00:16:41.578 "aliases": [ 00:16:41.578 "35cc6de0-a3fd-4222-8de1-d3df32204e61" 00:16:41.578 ], 00:16:41.578 "product_name": "Malloc disk", 00:16:41.578 "block_size": 512, 00:16:41.578 "num_blocks": 65536, 00:16:41.578 "uuid": "35cc6de0-a3fd-4222-8de1-d3df32204e61", 00:16:41.578 "assigned_rate_limits": { 00:16:41.578 "rw_ios_per_sec": 0, 00:16:41.578 "rw_mbytes_per_sec": 0, 00:16:41.578 "r_mbytes_per_sec": 0, 00:16:41.578 "w_mbytes_per_sec": 0 00:16:41.578 }, 00:16:41.578 "claimed": true, 00:16:41.578 "claim_type": "exclusive_write", 00:16:41.578 "zoned": false, 00:16:41.578 "supported_io_types": { 00:16:41.578 "read": true, 00:16:41.578 "write": true, 00:16:41.578 "unmap": true, 00:16:41.578 "write_zeroes": true, 00:16:41.578 "flush": true, 00:16:41.578 "reset": true, 00:16:41.578 "compare": false, 00:16:41.578 "compare_and_write": false, 00:16:41.578 "abort": true, 00:16:41.578 "nvme_admin": false, 00:16:41.578 "nvme_io": false 00:16:41.578 }, 00:16:41.578 "memory_domains": [ 00:16:41.578 { 00:16:41.578 "dma_device_id": "system", 00:16:41.578 "dma_device_type": 1 00:16:41.578 }, 00:16:41.578 { 00:16:41.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.578 "dma_device_type": 2 00:16:41.578 } 00:16:41.578 ], 00:16:41.578 "driver_specific": {} 00:16:41.578 } 00:16:41.578 ] 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.578 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.837 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.837 "name": "Existed_Raid", 00:16:41.837 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:41.837 "strip_size_kb": 0, 00:16:41.837 "state": "configuring", 00:16:41.837 "raid_level": "raid1", 00:16:41.837 "superblock": false, 00:16:41.837 "num_base_bdevs": 3, 00:16:41.837 "num_base_bdevs_discovered": 2, 00:16:41.837 "num_base_bdevs_operational": 3, 00:16:41.837 "base_bdevs_list": [ 00:16:41.837 { 00:16:41.837 "name": "BaseBdev1", 00:16:41.837 "uuid": "35cc6de0-a3fd-4222-8de1-d3df32204e61", 00:16:41.837 "is_configured": true, 00:16:41.837 "data_offset": 0, 00:16:41.837 "data_size": 65536 00:16:41.837 }, 00:16:41.837 { 00:16:41.837 "name": null, 00:16:41.837 "uuid": "92638523-3a3f-467d-97e5-c6abc6fb3e07", 00:16:41.837 "is_configured": false, 00:16:41.837 "data_offset": 0, 00:16:41.837 "data_size": 65536 00:16:41.837 }, 00:16:41.837 { 00:16:41.837 "name": "BaseBdev3", 00:16:41.837 "uuid": "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9", 00:16:41.837 "is_configured": true, 00:16:41.837 "data_offset": 0, 00:16:41.837 "data_size": 65536 00:16:41.837 } 00:16:41.837 ] 00:16:41.837 }' 00:16:41.837 23:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.837 23:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.404 23:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.404 23:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:42.662 23:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:42.662 23:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:42.921 [2024-05-14 23:32:06.023559] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:42.921 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:42.921 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.921 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:42.921 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:42.921 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:42.921 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:42.921 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.921 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.921 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.922 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.922 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.922 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.181 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.181 
"name": "Existed_Raid", 00:16:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.181 "strip_size_kb": 0, 00:16:43.181 "state": "configuring", 00:16:43.181 "raid_level": "raid1", 00:16:43.181 "superblock": false, 00:16:43.181 "num_base_bdevs": 3, 00:16:43.181 "num_base_bdevs_discovered": 1, 00:16:43.181 "num_base_bdevs_operational": 3, 00:16:43.181 "base_bdevs_list": [ 00:16:43.181 { 00:16:43.181 "name": "BaseBdev1", 00:16:43.181 "uuid": "35cc6de0-a3fd-4222-8de1-d3df32204e61", 00:16:43.181 "is_configured": true, 00:16:43.181 "data_offset": 0, 00:16:43.181 "data_size": 65536 00:16:43.181 }, 00:16:43.181 { 00:16:43.181 "name": null, 00:16:43.181 "uuid": "92638523-3a3f-467d-97e5-c6abc6fb3e07", 00:16:43.181 "is_configured": false, 00:16:43.181 "data_offset": 0, 00:16:43.181 "data_size": 65536 00:16:43.181 }, 00:16:43.181 { 00:16:43.181 "name": null, 00:16:43.181 "uuid": "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9", 00:16:43.181 "is_configured": false, 00:16:43.181 "data_offset": 0, 00:16:43.181 "data_size": 65536 00:16:43.181 } 00:16:43.181 ] 00:16:43.181 }' 00:16:43.181 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.181 23:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.749 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:43.749 23:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.008 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:16:44.008 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:44.268 [2024-05-14 23:32:07.463854] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.268 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.527 23:32:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.527 "name": "Existed_Raid", 00:16:44.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.527 "strip_size_kb": 0, 00:16:44.527 "state": "configuring", 00:16:44.527 "raid_level": "raid1", 00:16:44.527 "superblock": false, 00:16:44.527 "num_base_bdevs": 3, 00:16:44.527 "num_base_bdevs_discovered": 2, 00:16:44.527 "num_base_bdevs_operational": 3, 00:16:44.527 "base_bdevs_list": [ 00:16:44.527 { 00:16:44.527 "name": "BaseBdev1", 00:16:44.527 "uuid": "35cc6de0-a3fd-4222-8de1-d3df32204e61", 00:16:44.527 "is_configured": true, 00:16:44.527 "data_offset": 0, 00:16:44.527 "data_size": 65536 00:16:44.527 }, 00:16:44.527 { 00:16:44.527 "name": null, 00:16:44.527 "uuid": "92638523-3a3f-467d-97e5-c6abc6fb3e07", 00:16:44.527 "is_configured": false, 00:16:44.527 "data_offset": 0, 00:16:44.527 "data_size": 65536 00:16:44.527 }, 00:16:44.527 { 00:16:44.527 "name": "BaseBdev3", 00:16:44.527 "uuid": "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9", 00:16:44.527 "is_configured": true, 00:16:44.527 "data_offset": 0, 00:16:44.527 "data_size": 65536 00:16:44.527 } 00:16:44.527 ] 00:16:44.527 }' 00:16:44.527 23:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.527 23:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.465 23:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.465 23:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:45.465 23:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:16:45.465 23:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:45.724 [2024-05-14 23:32:08.932125] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.982 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:45.982 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:45.982 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:45.982 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:45.982 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:45.983 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:45.983 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.983 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.983 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.983 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.983 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.983 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:46.241 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.241 "name": "Existed_Raid", 00:16:46.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.241 "strip_size_kb": 0, 00:16:46.241 "state": "configuring", 00:16:46.241 "raid_level": "raid1", 00:16:46.241 "superblock": false, 00:16:46.241 "num_base_bdevs": 3, 00:16:46.241 "num_base_bdevs_discovered": 1, 00:16:46.241 "num_base_bdevs_operational": 3, 00:16:46.241 "base_bdevs_list": [ 00:16:46.241 { 00:16:46.241 "name": null, 00:16:46.241 "uuid": "35cc6de0-a3fd-4222-8de1-d3df32204e61", 00:16:46.241 "is_configured": false, 00:16:46.241 "data_offset": 0, 00:16:46.241 "data_size": 65536 00:16:46.241 }, 00:16:46.241 { 00:16:46.241 "name": null, 00:16:46.241 "uuid": "92638523-3a3f-467d-97e5-c6abc6fb3e07", 00:16:46.241 "is_configured": false, 00:16:46.241 "data_offset": 0, 00:16:46.241 "data_size": 65536 00:16:46.242 }, 00:16:46.242 { 00:16:46.242 "name": "BaseBdev3", 00:16:46.242 "uuid": "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9", 00:16:46.242 "is_configured": true, 00:16:46.242 "data_offset": 0, 00:16:46.242 "data_size": 65536 00:16:46.242 } 00:16:46.242 ] 00:16:46.242 }' 00:16:46.242 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.242 23:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.809 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.809 23:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:47.067 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:16:47.067 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:47.324 [2024-05-14 23:32:10.412338] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:47.324 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:47.324 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:47.324 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:47.324 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:47.325 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:47.325 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:47.325 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.325 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.325 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.325 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.325 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.325 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.582 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.582 "name": "Existed_Raid", 00:16:47.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.582 "strip_size_kb": 0, 00:16:47.582 "state": "configuring", 00:16:47.582 "raid_level": "raid1", 00:16:47.582 "superblock": false, 00:16:47.582 "num_base_bdevs": 3, 00:16:47.582 "num_base_bdevs_discovered": 2, 00:16:47.582 "num_base_bdevs_operational": 3, 00:16:47.582 "base_bdevs_list": [ 00:16:47.582 { 00:16:47.582 "name": null, 00:16:47.582 "uuid": "35cc6de0-a3fd-4222-8de1-d3df32204e61", 00:16:47.582 "is_configured": false, 00:16:47.582 "data_offset": 0, 00:16:47.582 "data_size": 65536 00:16:47.582 }, 00:16:47.582 { 00:16:47.582 "name": "BaseBdev2", 00:16:47.582 "uuid": "92638523-3a3f-467d-97e5-c6abc6fb3e07", 00:16:47.582 "is_configured": true, 00:16:47.582 "data_offset": 0, 00:16:47.582 "data_size": 65536 00:16:47.582 }, 00:16:47.582 { 00:16:47.582 "name": "BaseBdev3", 00:16:47.582 "uuid": "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9", 00:16:47.582 "is_configured": true, 00:16:47.582 "data_offset": 0, 00:16:47.582 "data_size": 65536 00:16:47.582 } 00:16:47.582 ] 00:16:47.582 }' 00:16:47.582 23:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.582 23:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.148 23:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.148 23:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:48.406 23:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:16:48.406 23:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.406 23:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:48.406 23:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 35cc6de0-a3fd-4222-8de1-d3df32204e61 00:16:48.664 [2024-05-14 23:32:11.925761] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:48.664 [2024-05-14 23:32:11.925803] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:16:48.664 [2024-05-14 23:32:11.925812] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:48.664 [2024-05-14 23:32:11.925912] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:48.664 [2024-05-14 23:32:11.926109] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:16:48.664 [2024-05-14 23:32:11.926123] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:16:48.664 NewBaseBdev 00:16:48.664 [2024-05-14 23:32:11.926610] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.664 23:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:16:48.664 23:32:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:16:48.664 23:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:48.664 23:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:48.664 23:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:48.664 23:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:48.664 23:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.923 23:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:49.181 [ 00:16:49.181 { 00:16:49.181 "name": "NewBaseBdev", 00:16:49.181 "aliases": [ 00:16:49.181 "35cc6de0-a3fd-4222-8de1-d3df32204e61" 00:16:49.181 ], 00:16:49.181 "product_name": "Malloc disk", 00:16:49.181 "block_size": 512, 00:16:49.181 "num_blocks": 65536, 00:16:49.181 "uuid": "35cc6de0-a3fd-4222-8de1-d3df32204e61", 00:16:49.181 "assigned_rate_limits": { 00:16:49.181 "rw_ios_per_sec": 0, 00:16:49.181 "rw_mbytes_per_sec": 0, 00:16:49.181 "r_mbytes_per_sec": 0, 00:16:49.181 "w_mbytes_per_sec": 0 00:16:49.181 }, 00:16:49.181 "claimed": true, 00:16:49.181 "claim_type": "exclusive_write", 00:16:49.181 "zoned": false, 00:16:49.181 "supported_io_types": { 00:16:49.181 "read": true, 00:16:49.181 "write": true, 00:16:49.181 "unmap": true, 00:16:49.181 "write_zeroes": true, 00:16:49.181 "flush": true, 00:16:49.181 "reset": true, 00:16:49.181 "compare": false, 00:16:49.181 "compare_and_write": false, 00:16:49.181 "abort": true, 00:16:49.181 "nvme_admin": false, 00:16:49.181 "nvme_io": false 00:16:49.181 }, 00:16:49.181 "memory_domains": [ 00:16:49.181 { 00:16:49.181 "dma_device_id": "system", 00:16:49.181 "dma_device_type": 1 00:16:49.181 }, 00:16:49.181 { 00:16:49.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.181 "dma_device_type": 2 00:16:49.181 } 00:16:49.181 ], 00:16:49.181 "driver_specific": {} 00:16:49.181 } 00:16:49.181 ] 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 
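For readability: the state check that follows, like every verify_raid_bdev_state call in this trace, reduces to one RPC plus a jq filter. A standalone sketch using the socket, raid name and filters that appear verbatim above (not a general recipe, just this run's values):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")'     # full state blob the test stores in raid_bdev_info
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq '.[0].base_bdevs_list[1].is_configured'         # single-field probes behind the [[ true == true ]] style checks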
00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.181 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.438 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:49.438 "name": "Existed_Raid", 00:16:49.438 "uuid": "d96e702a-823c-48cc-bb16-b1c7b30bb513", 00:16:49.438 "strip_size_kb": 0, 00:16:49.438 "state": "online", 00:16:49.438 "raid_level": "raid1", 00:16:49.438 "superblock": false, 00:16:49.438 "num_base_bdevs": 3, 00:16:49.438 "num_base_bdevs_discovered": 3, 00:16:49.438 "num_base_bdevs_operational": 3, 00:16:49.438 "base_bdevs_list": [ 00:16:49.438 { 00:16:49.438 "name": "NewBaseBdev", 00:16:49.438 "uuid": "35cc6de0-a3fd-4222-8de1-d3df32204e61", 00:16:49.438 "is_configured": true, 00:16:49.438 "data_offset": 0, 00:16:49.438 "data_size": 65536 00:16:49.438 }, 00:16:49.438 { 00:16:49.438 "name": "BaseBdev2", 00:16:49.438 "uuid": "92638523-3a3f-467d-97e5-c6abc6fb3e07", 00:16:49.438 "is_configured": true, 00:16:49.438 "data_offset": 0, 00:16:49.438 "data_size": 65536 00:16:49.438 }, 00:16:49.438 { 00:16:49.438 "name": "BaseBdev3", 00:16:49.439 "uuid": "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9", 00:16:49.439 "is_configured": true, 00:16:49.439 "data_offset": 0, 00:16:49.439 "data_size": 65536 00:16:49.439 } 00:16:49.439 ] 00:16:49.439 }' 00:16:49.439 23:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:49.439 23:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.004 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:16:50.005 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:16:50.005 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:16:50.005 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:16:50.005 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:16:50.005 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:16:50.005 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:50.005 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:16:50.263 [2024-05-14 23:32:13.414324] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.263 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:16:50.263 "name": "Existed_Raid", 00:16:50.263 "aliases": [ 00:16:50.263 "d96e702a-823c-48cc-bb16-b1c7b30bb513" 00:16:50.263 ], 00:16:50.263 "product_name": "Raid Volume", 00:16:50.263 "block_size": 512, 00:16:50.263 "num_blocks": 65536, 00:16:50.263 "uuid": "d96e702a-823c-48cc-bb16-b1c7b30bb513", 00:16:50.263 "assigned_rate_limits": { 00:16:50.263 "rw_ios_per_sec": 0, 00:16:50.263 "rw_mbytes_per_sec": 0, 00:16:50.263 "r_mbytes_per_sec": 0, 00:16:50.263 "w_mbytes_per_sec": 0 00:16:50.263 }, 00:16:50.263 "claimed": false, 00:16:50.263 "zoned": false, 00:16:50.263 "supported_io_types": { 00:16:50.263 "read": true, 00:16:50.263 "write": 
true, 00:16:50.263 "unmap": false, 00:16:50.263 "write_zeroes": true, 00:16:50.263 "flush": false, 00:16:50.263 "reset": true, 00:16:50.263 "compare": false, 00:16:50.263 "compare_and_write": false, 00:16:50.263 "abort": false, 00:16:50.263 "nvme_admin": false, 00:16:50.263 "nvme_io": false 00:16:50.263 }, 00:16:50.263 "memory_domains": [ 00:16:50.263 { 00:16:50.263 "dma_device_id": "system", 00:16:50.263 "dma_device_type": 1 00:16:50.263 }, 00:16:50.263 { 00:16:50.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.263 "dma_device_type": 2 00:16:50.263 }, 00:16:50.263 { 00:16:50.263 "dma_device_id": "system", 00:16:50.263 "dma_device_type": 1 00:16:50.263 }, 00:16:50.263 { 00:16:50.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.263 "dma_device_type": 2 00:16:50.263 }, 00:16:50.263 { 00:16:50.263 "dma_device_id": "system", 00:16:50.263 "dma_device_type": 1 00:16:50.263 }, 00:16:50.263 { 00:16:50.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.263 "dma_device_type": 2 00:16:50.263 } 00:16:50.263 ], 00:16:50.263 "driver_specific": { 00:16:50.264 "raid": { 00:16:50.264 "uuid": "d96e702a-823c-48cc-bb16-b1c7b30bb513", 00:16:50.264 "strip_size_kb": 0, 00:16:50.264 "state": "online", 00:16:50.264 "raid_level": "raid1", 00:16:50.264 "superblock": false, 00:16:50.264 "num_base_bdevs": 3, 00:16:50.264 "num_base_bdevs_discovered": 3, 00:16:50.264 "num_base_bdevs_operational": 3, 00:16:50.264 "base_bdevs_list": [ 00:16:50.264 { 00:16:50.264 "name": "NewBaseBdev", 00:16:50.264 "uuid": "35cc6de0-a3fd-4222-8de1-d3df32204e61", 00:16:50.264 "is_configured": true, 00:16:50.264 "data_offset": 0, 00:16:50.264 "data_size": 65536 00:16:50.264 }, 00:16:50.264 { 00:16:50.264 "name": "BaseBdev2", 00:16:50.264 "uuid": "92638523-3a3f-467d-97e5-c6abc6fb3e07", 00:16:50.264 "is_configured": true, 00:16:50.264 "data_offset": 0, 00:16:50.264 "data_size": 65536 00:16:50.264 }, 00:16:50.264 { 00:16:50.264 "name": "BaseBdev3", 00:16:50.264 "uuid": "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9", 00:16:50.264 "is_configured": true, 00:16:50.264 "data_offset": 0, 00:16:50.264 "data_size": 65536 00:16:50.264 } 00:16:50.264 ] 00:16:50.264 } 00:16:50.264 } 00:16:50.264 }' 00:16:50.264 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.264 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:16:50.264 BaseBdev2 00:16:50.264 BaseBdev3' 00:16:50.264 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:50.264 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:50.264 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:50.522 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:50.522 "name": "NewBaseBdev", 00:16:50.522 "aliases": [ 00:16:50.522 "35cc6de0-a3fd-4222-8de1-d3df32204e61" 00:16:50.522 ], 00:16:50.522 "product_name": "Malloc disk", 00:16:50.522 "block_size": 512, 00:16:50.522 "num_blocks": 65536, 00:16:50.522 "uuid": "35cc6de0-a3fd-4222-8de1-d3df32204e61", 00:16:50.522 "assigned_rate_limits": { 00:16:50.522 "rw_ios_per_sec": 0, 00:16:50.522 "rw_mbytes_per_sec": 0, 00:16:50.522 "r_mbytes_per_sec": 0, 00:16:50.522 "w_mbytes_per_sec": 0 00:16:50.522 }, 00:16:50.522 "claimed": true, 
00:16:50.522 "claim_type": "exclusive_write", 00:16:50.522 "zoned": false, 00:16:50.522 "supported_io_types": { 00:16:50.522 "read": true, 00:16:50.522 "write": true, 00:16:50.522 "unmap": true, 00:16:50.522 "write_zeroes": true, 00:16:50.522 "flush": true, 00:16:50.522 "reset": true, 00:16:50.522 "compare": false, 00:16:50.522 "compare_and_write": false, 00:16:50.522 "abort": true, 00:16:50.522 "nvme_admin": false, 00:16:50.522 "nvme_io": false 00:16:50.522 }, 00:16:50.522 "memory_domains": [ 00:16:50.522 { 00:16:50.522 "dma_device_id": "system", 00:16:50.522 "dma_device_type": 1 00:16:50.522 }, 00:16:50.522 { 00:16:50.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.522 "dma_device_type": 2 00:16:50.522 } 00:16:50.522 ], 00:16:50.522 "driver_specific": {} 00:16:50.522 }' 00:16:50.522 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:50.522 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:50.522 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:50.522 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:50.781 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:50.781 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.781 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:50.781 23:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:50.781 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.781 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:50.781 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:51.039 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:51.039 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:51.039 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:51.039 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:51.039 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:51.039 "name": "BaseBdev2", 00:16:51.039 "aliases": [ 00:16:51.039 "92638523-3a3f-467d-97e5-c6abc6fb3e07" 00:16:51.039 ], 00:16:51.039 "product_name": "Malloc disk", 00:16:51.039 "block_size": 512, 00:16:51.039 "num_blocks": 65536, 00:16:51.039 "uuid": "92638523-3a3f-467d-97e5-c6abc6fb3e07", 00:16:51.039 "assigned_rate_limits": { 00:16:51.039 "rw_ios_per_sec": 0, 00:16:51.039 "rw_mbytes_per_sec": 0, 00:16:51.039 "r_mbytes_per_sec": 0, 00:16:51.039 "w_mbytes_per_sec": 0 00:16:51.039 }, 00:16:51.039 "claimed": true, 00:16:51.039 "claim_type": "exclusive_write", 00:16:51.039 "zoned": false, 00:16:51.039 "supported_io_types": { 00:16:51.039 "read": true, 00:16:51.039 "write": true, 00:16:51.039 "unmap": true, 00:16:51.039 "write_zeroes": true, 00:16:51.039 "flush": true, 00:16:51.039 "reset": true, 00:16:51.039 "compare": false, 00:16:51.039 "compare_and_write": false, 00:16:51.039 "abort": true, 00:16:51.039 "nvme_admin": false, 00:16:51.039 "nvme_io": false 00:16:51.039 }, 00:16:51.039 "memory_domains": [ 
00:16:51.039 { 00:16:51.039 "dma_device_id": "system", 00:16:51.039 "dma_device_type": 1 00:16:51.039 }, 00:16:51.039 { 00:16:51.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.039 "dma_device_type": 2 00:16:51.039 } 00:16:51.039 ], 00:16:51.039 "driver_specific": {} 00:16:51.039 }' 00:16:51.039 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:51.297 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:51.297 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:51.297 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:51.297 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:51.297 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:51.297 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:51.297 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:51.556 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.556 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:51.556 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:51.556 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:51.556 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:16:51.556 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:16:51.556 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:51.815 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:16:51.815 "name": "BaseBdev3", 00:16:51.815 "aliases": [ 00:16:51.815 "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9" 00:16:51.815 ], 00:16:51.815 "product_name": "Malloc disk", 00:16:51.815 "block_size": 512, 00:16:51.815 "num_blocks": 65536, 00:16:51.815 "uuid": "78df0c14-cbc3-4717-bc1a-b8bd3a900ae9", 00:16:51.815 "assigned_rate_limits": { 00:16:51.815 "rw_ios_per_sec": 0, 00:16:51.815 "rw_mbytes_per_sec": 0, 00:16:51.815 "r_mbytes_per_sec": 0, 00:16:51.815 "w_mbytes_per_sec": 0 00:16:51.815 }, 00:16:51.815 "claimed": true, 00:16:51.815 "claim_type": "exclusive_write", 00:16:51.815 "zoned": false, 00:16:51.815 "supported_io_types": { 00:16:51.815 "read": true, 00:16:51.815 "write": true, 00:16:51.815 "unmap": true, 00:16:51.815 "write_zeroes": true, 00:16:51.815 "flush": true, 00:16:51.815 "reset": true, 00:16:51.815 "compare": false, 00:16:51.815 "compare_and_write": false, 00:16:51.815 "abort": true, 00:16:51.815 "nvme_admin": false, 00:16:51.815 "nvme_io": false 00:16:51.815 }, 00:16:51.815 "memory_domains": [ 00:16:51.815 { 00:16:51.815 "dma_device_id": "system", 00:16:51.815 "dma_device_type": 1 00:16:51.815 }, 00:16:51.815 { 00:16:51.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.815 "dma_device_type": 2 00:16:51.815 } 00:16:51.815 ], 00:16:51.815 "driver_specific": {} 00:16:51.815 }' 00:16:51.815 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:16:51.815 23:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 
00:16:51.815 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:16:51.815 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:51.815 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:16:52.074 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.074 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:52.074 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:16:52.074 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.074 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:52.074 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:16:52.074 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:16:52.074 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:52.332 [2024-05-14 23:32:15.526430] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.332 [2024-05-14 23:32:15.526464] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.332 [2024-05-14 23:32:15.526532] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.332 [2024-05-14 23:32:15.526752] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.332 [2024-05-14 23:32:15.526766] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:16:52.332 23:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 61400 00:16:52.332 23:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 61400 ']' 00:16:52.332 23:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 61400 00:16:52.332 23:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:16:52.332 23:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:52.332 23:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61400 00:16:52.332 killing process with pid 61400 00:16:52.332 23:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:52.332 23:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:52.332 23:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61400' 00:16:52.332 23:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 61400 00:16:52.332 23:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 61400 00:16:52.332 [2024-05-14 23:32:15.568946] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.590 [2024-05-14 23:32:15.816378] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.968 ************************************ 00:16:53.968 END TEST raid_state_function_test 00:16:53.968 ************************************ 
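Condensed from the trace above, the non-superblock run that just finished exercised the remove/re-add path with the following RPC sequence (a sketch only; $rpc_py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock invocation used throughout, and the names, sizes and UUID are the ones this run logged):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc_py bdev_malloc_create 32 512 -b BaseBdev1                 # fill the unconfigured first slot
    $rpc_py bdev_raid_remove_base_bdev BaseBdev3                   # detach a configured member
    $rpc_py bdev_raid_add_base_bdev Existed_Raid BaseBdev3         # attach it again
    $rpc_py bdev_malloc_delete BaseBdev1                           # delete a member's backing disk outright
    $rpc_py bdev_raid_add_base_bdev Existed_Raid BaseBdev2         # attach the remaining malloc disk
    $rpc_py bdev_malloc_create 32 512 -b NewBaseBdev \
        -u 35cc6de0-a3fd-4222-8de1-d3df32204e61                    # recreate the deleted member under its old slot UUID; the raid then goes online
    $rpc_py bdev_raid_delete Existed_Raid                          # tear down at the end of the test

After each step the test re-ran the bdev_raid_get_bdevs/jq pair shown earlier to confirm the expected state, num_base_bdevs_discovered and is_configured flags.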
00:16:53.968 23:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:16:53.968 00:16:53.968 real 0m30.401s 00:16:53.968 user 0m57.323s 00:16:53.968 sys 0m3.065s 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.968 23:32:17 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:16:53.968 23:32:17 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:53.968 23:32:17 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:53.968 23:32:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.968 ************************************ 00:16:53.968 START TEST raid_state_function_test_sb 00:16:53.968 ************************************ 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 true 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:16:53.968 Process raid pid: 62390 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:16:53.968 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:16:53.969 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=62390 00:16:53.969 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 62390' 00:16:53.969 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:53.969 23:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 62390 /var/tmp/spdk-raid.sock 00:16:53.969 23:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 62390 ']' 00:16:53.969 23:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:53.969 23:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:53.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:53.969 23:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:53.969 23:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:53.969 23:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.969 [2024-05-14 23:32:17.219496] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
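The _sb variant starting here differs from the run above mainly in passing -s (superblock) to bdev_raid_create. A minimal sketch of the setup it performs against the freshly started app, using only the commands visible in this trace (the app is backgrounded and the suite's waitforlisten helper waits for the RPC socket before any rpc.py call):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    # create the raid before any base bdev exists; it stays "configuring" until BaseBdev1-3 appear
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

With -s each member reserves space for the on-disk superblock, which is why the state dumps later in this run report data_offset 2048 and data_size 63488 for configured members instead of the 0 / 65536 seen in the non-superblock run.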
00:16:53.969 [2024-05-14 23:32:17.219701] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.227 [2024-05-14 23:32:17.374945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.486 [2024-05-14 23:32:17.599138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.745 [2024-05-14 23:32:17.811432] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.004 23:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:55.004 23:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:16:55.004 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:55.263 [2024-05-14 23:32:18.334576] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:55.263 [2024-05-14 23:32:18.334665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:55.263 [2024-05-14 23:32:18.334692] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.263 [2024-05-14 23:32:18.334737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.263 [2024-05-14 23:32:18.334746] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:55.263 [2024-05-14 23:32:18.334792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.263 "name": "Existed_Raid", 00:16:55.263 "uuid": "5b28c000-ba7d-4985-813e-24f21677fc21", 
00:16:55.263 "strip_size_kb": 0, 00:16:55.263 "state": "configuring", 00:16:55.263 "raid_level": "raid1", 00:16:55.263 "superblock": true, 00:16:55.263 "num_base_bdevs": 3, 00:16:55.263 "num_base_bdevs_discovered": 0, 00:16:55.263 "num_base_bdevs_operational": 3, 00:16:55.263 "base_bdevs_list": [ 00:16:55.263 { 00:16:55.263 "name": "BaseBdev1", 00:16:55.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.263 "is_configured": false, 00:16:55.263 "data_offset": 0, 00:16:55.263 "data_size": 0 00:16:55.263 }, 00:16:55.263 { 00:16:55.263 "name": "BaseBdev2", 00:16:55.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.263 "is_configured": false, 00:16:55.263 "data_offset": 0, 00:16:55.263 "data_size": 0 00:16:55.263 }, 00:16:55.263 { 00:16:55.263 "name": "BaseBdev3", 00:16:55.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.263 "is_configured": false, 00:16:55.263 "data_offset": 0, 00:16:55.263 "data_size": 0 00:16:55.263 } 00:16:55.263 ] 00:16:55.263 }' 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.263 23:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.203 23:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:56.203 [2024-05-14 23:32:19.326626] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.203 [2024-05-14 23:32:19.326680] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:16:56.203 23:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:56.464 [2024-05-14 23:32:19.526728] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.464 [2024-05-14 23:32:19.526812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.464 [2024-05-14 23:32:19.526838] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.464 [2024-05-14 23:32:19.526870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.464 [2024-05-14 23:32:19.526881] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.464 [2024-05-14 23:32:19.526906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.464 23:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:56.722 [2024-05-14 23:32:19.765689] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.722 BaseBdev1 00:16:56.722 23:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:16:56.722 23:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:56.722 23:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:56.722 23:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:56.722 23:32:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:56.722 23:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:56.722 23:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.722 23:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:56.981 [ 00:16:56.982 { 00:16:56.982 "name": "BaseBdev1", 00:16:56.982 "aliases": [ 00:16:56.982 "c5f51ae0-80d3-457e-a6d5-e6d73fc4e2df" 00:16:56.982 ], 00:16:56.982 "product_name": "Malloc disk", 00:16:56.982 "block_size": 512, 00:16:56.982 "num_blocks": 65536, 00:16:56.982 "uuid": "c5f51ae0-80d3-457e-a6d5-e6d73fc4e2df", 00:16:56.982 "assigned_rate_limits": { 00:16:56.982 "rw_ios_per_sec": 0, 00:16:56.982 "rw_mbytes_per_sec": 0, 00:16:56.982 "r_mbytes_per_sec": 0, 00:16:56.982 "w_mbytes_per_sec": 0 00:16:56.982 }, 00:16:56.982 "claimed": true, 00:16:56.982 "claim_type": "exclusive_write", 00:16:56.982 "zoned": false, 00:16:56.982 "supported_io_types": { 00:16:56.982 "read": true, 00:16:56.982 "write": true, 00:16:56.982 "unmap": true, 00:16:56.982 "write_zeroes": true, 00:16:56.982 "flush": true, 00:16:56.982 "reset": true, 00:16:56.982 "compare": false, 00:16:56.982 "compare_and_write": false, 00:16:56.982 "abort": true, 00:16:56.982 "nvme_admin": false, 00:16:56.982 "nvme_io": false 00:16:56.982 }, 00:16:56.982 "memory_domains": [ 00:16:56.982 { 00:16:56.982 "dma_device_id": "system", 00:16:56.982 "dma_device_type": 1 00:16:56.982 }, 00:16:56.982 { 00:16:56.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.982 "dma_device_type": 2 00:16:56.982 } 00:16:56.982 ], 00:16:56.982 "driver_specific": {} 00:16:56.982 } 00:16:56.982 ] 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.982 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:57.241 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.241 "name": "Existed_Raid", 00:16:57.241 "uuid": "e0bd79cd-7090-44f4-b5c0-2b96fb219f4d", 00:16:57.241 "strip_size_kb": 0, 00:16:57.241 "state": "configuring", 00:16:57.241 "raid_level": "raid1", 00:16:57.241 "superblock": true, 00:16:57.241 "num_base_bdevs": 3, 00:16:57.241 "num_base_bdevs_discovered": 1, 00:16:57.241 "num_base_bdevs_operational": 3, 00:16:57.241 "base_bdevs_list": [ 00:16:57.241 { 00:16:57.241 "name": "BaseBdev1", 00:16:57.241 "uuid": "c5f51ae0-80d3-457e-a6d5-e6d73fc4e2df", 00:16:57.241 "is_configured": true, 00:16:57.241 "data_offset": 2048, 00:16:57.241 "data_size": 63488 00:16:57.241 }, 00:16:57.241 { 00:16:57.241 "name": "BaseBdev2", 00:16:57.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.241 "is_configured": false, 00:16:57.241 "data_offset": 0, 00:16:57.241 "data_size": 0 00:16:57.241 }, 00:16:57.241 { 00:16:57.241 "name": "BaseBdev3", 00:16:57.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.241 "is_configured": false, 00:16:57.241 "data_offset": 0, 00:16:57.241 "data_size": 0 00:16:57.241 } 00:16:57.241 ] 00:16:57.241 }' 00:16:57.241 23:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.241 23:32:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.809 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:58.067 [2024-05-14 23:32:21.310049] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.067 [2024-05-14 23:32:21.310122] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:16:58.067 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:58.326 [2024-05-14 23:32:21.594162] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.326 [2024-05-14 23:32:21.595618] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.326 [2024-05-14 23:32:21.595682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.326 [2024-05-14 23:32:21.595695] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:58.326 [2024-05-14 23:32:21.595723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:58.326 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:16:58.326 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:16:58.326 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:58.326 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.326 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:58.326 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:58.326 23:32:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:58.326 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:58.326 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.326 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.326 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.326 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.586 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.586 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.586 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.586 "name": "Existed_Raid", 00:16:58.586 "uuid": "24d1d413-0de4-43be-ace0-0567f698bf32", 00:16:58.586 "strip_size_kb": 0, 00:16:58.586 "state": "configuring", 00:16:58.586 "raid_level": "raid1", 00:16:58.586 "superblock": true, 00:16:58.586 "num_base_bdevs": 3, 00:16:58.586 "num_base_bdevs_discovered": 1, 00:16:58.586 "num_base_bdevs_operational": 3, 00:16:58.586 "base_bdevs_list": [ 00:16:58.586 { 00:16:58.586 "name": "BaseBdev1", 00:16:58.586 "uuid": "c5f51ae0-80d3-457e-a6d5-e6d73fc4e2df", 00:16:58.586 "is_configured": true, 00:16:58.586 "data_offset": 2048, 00:16:58.586 "data_size": 63488 00:16:58.586 }, 00:16:58.586 { 00:16:58.586 "name": "BaseBdev2", 00:16:58.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.586 "is_configured": false, 00:16:58.586 "data_offset": 0, 00:16:58.586 "data_size": 0 00:16:58.586 }, 00:16:58.586 { 00:16:58.586 "name": "BaseBdev3", 00:16:58.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.586 "is_configured": false, 00:16:58.586 "data_offset": 0, 00:16:58.586 "data_size": 0 00:16:58.586 } 00:16:58.586 ] 00:16:58.586 }' 00:16:58.586 23:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.586 23:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.534 23:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:59.534 [2024-05-14 23:32:22.785356] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.534 BaseBdev2 00:16:59.534 23:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:16:59.534 23:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:59.534 23:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:59.534 23:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:59.534 23:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:59.534 23:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:59.534 23:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.818 23:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:00.077 [ 00:17:00.077 { 00:17:00.077 "name": "BaseBdev2", 00:17:00.077 "aliases": [ 00:17:00.077 "55d4cbd6-2dad-4031-8227-8798d2f26085" 00:17:00.077 ], 00:17:00.077 "product_name": "Malloc disk", 00:17:00.077 "block_size": 512, 00:17:00.077 "num_blocks": 65536, 00:17:00.077 "uuid": "55d4cbd6-2dad-4031-8227-8798d2f26085", 00:17:00.077 "assigned_rate_limits": { 00:17:00.077 "rw_ios_per_sec": 0, 00:17:00.077 "rw_mbytes_per_sec": 0, 00:17:00.077 "r_mbytes_per_sec": 0, 00:17:00.077 "w_mbytes_per_sec": 0 00:17:00.077 }, 00:17:00.077 "claimed": true, 00:17:00.077 "claim_type": "exclusive_write", 00:17:00.077 "zoned": false, 00:17:00.077 "supported_io_types": { 00:17:00.077 "read": true, 00:17:00.077 "write": true, 00:17:00.077 "unmap": true, 00:17:00.077 "write_zeroes": true, 00:17:00.077 "flush": true, 00:17:00.077 "reset": true, 00:17:00.077 "compare": false, 00:17:00.077 "compare_and_write": false, 00:17:00.077 "abort": true, 00:17:00.077 "nvme_admin": false, 00:17:00.077 "nvme_io": false 00:17:00.077 }, 00:17:00.077 "memory_domains": [ 00:17:00.077 { 00:17:00.077 "dma_device_id": "system", 00:17:00.077 "dma_device_type": 1 00:17:00.077 }, 00:17:00.077 { 00:17:00.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.077 "dma_device_type": 2 00:17:00.077 } 00:17:00.077 ], 00:17:00.077 "driver_specific": {} 00:17:00.077 } 00:17:00.077 ] 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.077 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.339 23:32:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.339 "name": "Existed_Raid", 00:17:00.339 "uuid": "24d1d413-0de4-43be-ace0-0567f698bf32", 00:17:00.340 "strip_size_kb": 0, 00:17:00.340 "state": "configuring", 00:17:00.340 "raid_level": "raid1", 00:17:00.340 "superblock": true, 00:17:00.340 "num_base_bdevs": 3, 00:17:00.340 "num_base_bdevs_discovered": 2, 00:17:00.340 "num_base_bdevs_operational": 3, 00:17:00.340 "base_bdevs_list": [ 00:17:00.340 { 00:17:00.340 "name": "BaseBdev1", 00:17:00.340 "uuid": "c5f51ae0-80d3-457e-a6d5-e6d73fc4e2df", 00:17:00.340 "is_configured": true, 00:17:00.340 "data_offset": 2048, 00:17:00.340 "data_size": 63488 00:17:00.340 }, 00:17:00.340 { 00:17:00.340 "name": "BaseBdev2", 00:17:00.340 "uuid": "55d4cbd6-2dad-4031-8227-8798d2f26085", 00:17:00.340 "is_configured": true, 00:17:00.340 "data_offset": 2048, 00:17:00.340 "data_size": 63488 00:17:00.340 }, 00:17:00.340 { 00:17:00.340 "name": "BaseBdev3", 00:17:00.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.340 "is_configured": false, 00:17:00.340 "data_offset": 0, 00:17:00.340 "data_size": 0 00:17:00.340 } 00:17:00.340 ] 00:17:00.340 }' 00:17:00.340 23:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.340 23:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.911 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:01.170 [2024-05-14 23:32:24.398782] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.170 [2024-05-14 23:32:24.398976] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:17:01.170 [2024-05-14 23:32:24.398992] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:01.170 [2024-05-14 23:32:24.399095] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:01.170 BaseBdev3 00:17:01.170 [2024-05-14 23:32:24.399617] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:17:01.170 [2024-05-14 23:32:24.399635] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:17:01.170 [2024-05-14 23:32:24.399740] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.170 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:17:01.170 23:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:01.170 23:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:01.170 23:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:01.170 23:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:01.170 23:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:01.170 23:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:01.429 23:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:17:01.688 [ 00:17:01.688 { 00:17:01.688 "name": "BaseBdev3", 00:17:01.688 "aliases": [ 00:17:01.688 "7926343d-1654-48b9-bd66-165fc17d4da3" 00:17:01.688 ], 00:17:01.688 "product_name": "Malloc disk", 00:17:01.688 "block_size": 512, 00:17:01.688 "num_blocks": 65536, 00:17:01.688 "uuid": "7926343d-1654-48b9-bd66-165fc17d4da3", 00:17:01.688 "assigned_rate_limits": { 00:17:01.688 "rw_ios_per_sec": 0, 00:17:01.688 "rw_mbytes_per_sec": 0, 00:17:01.688 "r_mbytes_per_sec": 0, 00:17:01.688 "w_mbytes_per_sec": 0 00:17:01.688 }, 00:17:01.688 "claimed": true, 00:17:01.688 "claim_type": "exclusive_write", 00:17:01.688 "zoned": false, 00:17:01.688 "supported_io_types": { 00:17:01.688 "read": true, 00:17:01.688 "write": true, 00:17:01.688 "unmap": true, 00:17:01.688 "write_zeroes": true, 00:17:01.688 "flush": true, 00:17:01.688 "reset": true, 00:17:01.688 "compare": false, 00:17:01.688 "compare_and_write": false, 00:17:01.688 "abort": true, 00:17:01.688 "nvme_admin": false, 00:17:01.688 "nvme_io": false 00:17:01.688 }, 00:17:01.688 "memory_domains": [ 00:17:01.688 { 00:17:01.688 "dma_device_id": "system", 00:17:01.688 "dma_device_type": 1 00:17:01.688 }, 00:17:01.688 { 00:17:01.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.688 "dma_device_type": 2 00:17:01.688 } 00:17:01.688 ], 00:17:01.688 "driver_specific": {} 00:17:01.688 } 00:17:01.688 ] 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.688 23:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.947 23:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.947 "name": "Existed_Raid", 00:17:01.947 "uuid": "24d1d413-0de4-43be-ace0-0567f698bf32", 00:17:01.947 "strip_size_kb": 0, 00:17:01.947 "state": "online", 00:17:01.947 "raid_level": "raid1", 00:17:01.947 "superblock": true, 00:17:01.947 
"num_base_bdevs": 3, 00:17:01.947 "num_base_bdevs_discovered": 3, 00:17:01.947 "num_base_bdevs_operational": 3, 00:17:01.947 "base_bdevs_list": [ 00:17:01.947 { 00:17:01.947 "name": "BaseBdev1", 00:17:01.947 "uuid": "c5f51ae0-80d3-457e-a6d5-e6d73fc4e2df", 00:17:01.947 "is_configured": true, 00:17:01.947 "data_offset": 2048, 00:17:01.947 "data_size": 63488 00:17:01.947 }, 00:17:01.947 { 00:17:01.947 "name": "BaseBdev2", 00:17:01.947 "uuid": "55d4cbd6-2dad-4031-8227-8798d2f26085", 00:17:01.947 "is_configured": true, 00:17:01.947 "data_offset": 2048, 00:17:01.947 "data_size": 63488 00:17:01.947 }, 00:17:01.947 { 00:17:01.947 "name": "BaseBdev3", 00:17:01.947 "uuid": "7926343d-1654-48b9-bd66-165fc17d4da3", 00:17:01.947 "is_configured": true, 00:17:01.947 "data_offset": 2048, 00:17:01.947 "data_size": 63488 00:17:01.947 } 00:17:01.947 ] 00:17:01.947 }' 00:17:01.947 23:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.947 23:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.884 23:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:17:02.884 23:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:17:02.884 23:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:02.884 23:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:02.884 23:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:02.884 23:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:17:02.884 23:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:02.884 23:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:02.884 [2024-05-14 23:32:26.135239] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.884 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:02.884 "name": "Existed_Raid", 00:17:02.884 "aliases": [ 00:17:02.884 "24d1d413-0de4-43be-ace0-0567f698bf32" 00:17:02.884 ], 00:17:02.884 "product_name": "Raid Volume", 00:17:02.884 "block_size": 512, 00:17:02.884 "num_blocks": 63488, 00:17:02.884 "uuid": "24d1d413-0de4-43be-ace0-0567f698bf32", 00:17:02.884 "assigned_rate_limits": { 00:17:02.884 "rw_ios_per_sec": 0, 00:17:02.884 "rw_mbytes_per_sec": 0, 00:17:02.884 "r_mbytes_per_sec": 0, 00:17:02.884 "w_mbytes_per_sec": 0 00:17:02.884 }, 00:17:02.884 "claimed": false, 00:17:02.884 "zoned": false, 00:17:02.884 "supported_io_types": { 00:17:02.884 "read": true, 00:17:02.884 "write": true, 00:17:02.884 "unmap": false, 00:17:02.884 "write_zeroes": true, 00:17:02.884 "flush": false, 00:17:02.884 "reset": true, 00:17:02.884 "compare": false, 00:17:02.884 "compare_and_write": false, 00:17:02.884 "abort": false, 00:17:02.884 "nvme_admin": false, 00:17:02.884 "nvme_io": false 00:17:02.884 }, 00:17:02.884 "memory_domains": [ 00:17:02.884 { 00:17:02.884 "dma_device_id": "system", 00:17:02.884 "dma_device_type": 1 00:17:02.884 }, 00:17:02.884 { 00:17:02.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.884 "dma_device_type": 2 00:17:02.884 }, 00:17:02.884 { 00:17:02.884 "dma_device_id": "system", 
00:17:02.884 "dma_device_type": 1 00:17:02.884 }, 00:17:02.884 { 00:17:02.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.884 "dma_device_type": 2 00:17:02.884 }, 00:17:02.884 { 00:17:02.884 "dma_device_id": "system", 00:17:02.884 "dma_device_type": 1 00:17:02.884 }, 00:17:02.884 { 00:17:02.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.884 "dma_device_type": 2 00:17:02.884 } 00:17:02.884 ], 00:17:02.884 "driver_specific": { 00:17:02.884 "raid": { 00:17:02.884 "uuid": "24d1d413-0de4-43be-ace0-0567f698bf32", 00:17:02.884 "strip_size_kb": 0, 00:17:02.884 "state": "online", 00:17:02.884 "raid_level": "raid1", 00:17:02.884 "superblock": true, 00:17:02.884 "num_base_bdevs": 3, 00:17:02.884 "num_base_bdevs_discovered": 3, 00:17:02.884 "num_base_bdevs_operational": 3, 00:17:02.884 "base_bdevs_list": [ 00:17:02.884 { 00:17:02.884 "name": "BaseBdev1", 00:17:02.884 "uuid": "c5f51ae0-80d3-457e-a6d5-e6d73fc4e2df", 00:17:02.884 "is_configured": true, 00:17:02.884 "data_offset": 2048, 00:17:02.884 "data_size": 63488 00:17:02.884 }, 00:17:02.884 { 00:17:02.884 "name": "BaseBdev2", 00:17:02.884 "uuid": "55d4cbd6-2dad-4031-8227-8798d2f26085", 00:17:02.884 "is_configured": true, 00:17:02.884 "data_offset": 2048, 00:17:02.884 "data_size": 63488 00:17:02.884 }, 00:17:02.884 { 00:17:02.884 "name": "BaseBdev3", 00:17:02.884 "uuid": "7926343d-1654-48b9-bd66-165fc17d4da3", 00:17:02.884 "is_configured": true, 00:17:02.884 "data_offset": 2048, 00:17:02.884 "data_size": 63488 00:17:02.884 } 00:17:02.884 ] 00:17:02.884 } 00:17:02.884 } 00:17:02.884 }' 00:17:02.884 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.144 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:17:03.144 BaseBdev2 00:17:03.144 BaseBdev3' 00:17:03.144 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:03.144 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:03.144 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:03.439 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:03.439 "name": "BaseBdev1", 00:17:03.439 "aliases": [ 00:17:03.439 "c5f51ae0-80d3-457e-a6d5-e6d73fc4e2df" 00:17:03.439 ], 00:17:03.439 "product_name": "Malloc disk", 00:17:03.439 "block_size": 512, 00:17:03.439 "num_blocks": 65536, 00:17:03.439 "uuid": "c5f51ae0-80d3-457e-a6d5-e6d73fc4e2df", 00:17:03.439 "assigned_rate_limits": { 00:17:03.439 "rw_ios_per_sec": 0, 00:17:03.439 "rw_mbytes_per_sec": 0, 00:17:03.439 "r_mbytes_per_sec": 0, 00:17:03.439 "w_mbytes_per_sec": 0 00:17:03.439 }, 00:17:03.439 "claimed": true, 00:17:03.439 "claim_type": "exclusive_write", 00:17:03.439 "zoned": false, 00:17:03.439 "supported_io_types": { 00:17:03.439 "read": true, 00:17:03.439 "write": true, 00:17:03.439 "unmap": true, 00:17:03.439 "write_zeroes": true, 00:17:03.439 "flush": true, 00:17:03.439 "reset": true, 00:17:03.439 "compare": false, 00:17:03.439 "compare_and_write": false, 00:17:03.439 "abort": true, 00:17:03.439 "nvme_admin": false, 00:17:03.439 "nvme_io": false 00:17:03.439 }, 00:17:03.439 "memory_domains": [ 00:17:03.439 { 00:17:03.440 "dma_device_id": "system", 00:17:03.440 "dma_device_type": 1 00:17:03.440 }, 
00:17:03.440 { 00:17:03.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.440 "dma_device_type": 2 00:17:03.440 } 00:17:03.440 ], 00:17:03.440 "driver_specific": {} 00:17:03.440 }' 00:17:03.440 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:03.440 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:03.440 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:03.440 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:03.440 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:03.440 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:03.440 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:03.440 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:03.699 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:03.699 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:03.699 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:03.699 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:03.699 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:03.699 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:03.699 23:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:03.957 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:03.957 "name": "BaseBdev2", 00:17:03.957 "aliases": [ 00:17:03.957 "55d4cbd6-2dad-4031-8227-8798d2f26085" 00:17:03.958 ], 00:17:03.958 "product_name": "Malloc disk", 00:17:03.958 "block_size": 512, 00:17:03.958 "num_blocks": 65536, 00:17:03.958 "uuid": "55d4cbd6-2dad-4031-8227-8798d2f26085", 00:17:03.958 "assigned_rate_limits": { 00:17:03.958 "rw_ios_per_sec": 0, 00:17:03.958 "rw_mbytes_per_sec": 0, 00:17:03.958 "r_mbytes_per_sec": 0, 00:17:03.958 "w_mbytes_per_sec": 0 00:17:03.958 }, 00:17:03.958 "claimed": true, 00:17:03.958 "claim_type": "exclusive_write", 00:17:03.958 "zoned": false, 00:17:03.958 "supported_io_types": { 00:17:03.958 "read": true, 00:17:03.958 "write": true, 00:17:03.958 "unmap": true, 00:17:03.958 "write_zeroes": true, 00:17:03.958 "flush": true, 00:17:03.958 "reset": true, 00:17:03.958 "compare": false, 00:17:03.958 "compare_and_write": false, 00:17:03.958 "abort": true, 00:17:03.958 "nvme_admin": false, 00:17:03.958 "nvme_io": false 00:17:03.958 }, 00:17:03.958 "memory_domains": [ 00:17:03.958 { 00:17:03.958 "dma_device_id": "system", 00:17:03.958 "dma_device_type": 1 00:17:03.958 }, 00:17:03.958 { 00:17:03.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.958 "dma_device_type": 2 00:17:03.958 } 00:17:03.958 ], 00:17:03.958 "driver_specific": {} 00:17:03.958 }' 00:17:03.958 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:03.958 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:03.958 23:32:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:03.958 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:03.958 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:04.216 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:04.216 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:04.216 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:04.216 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:04.216 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:04.216 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:04.475 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:04.475 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:04.475 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:04.475 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:04.734 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:04.734 "name": "BaseBdev3", 00:17:04.734 "aliases": [ 00:17:04.734 "7926343d-1654-48b9-bd66-165fc17d4da3" 00:17:04.734 ], 00:17:04.734 "product_name": "Malloc disk", 00:17:04.734 "block_size": 512, 00:17:04.734 "num_blocks": 65536, 00:17:04.734 "uuid": "7926343d-1654-48b9-bd66-165fc17d4da3", 00:17:04.734 "assigned_rate_limits": { 00:17:04.734 "rw_ios_per_sec": 0, 00:17:04.734 "rw_mbytes_per_sec": 0, 00:17:04.734 "r_mbytes_per_sec": 0, 00:17:04.734 "w_mbytes_per_sec": 0 00:17:04.734 }, 00:17:04.734 "claimed": true, 00:17:04.734 "claim_type": "exclusive_write", 00:17:04.734 "zoned": false, 00:17:04.734 "supported_io_types": { 00:17:04.734 "read": true, 00:17:04.734 "write": true, 00:17:04.734 "unmap": true, 00:17:04.734 "write_zeroes": true, 00:17:04.734 "flush": true, 00:17:04.734 "reset": true, 00:17:04.734 "compare": false, 00:17:04.734 "compare_and_write": false, 00:17:04.734 "abort": true, 00:17:04.734 "nvme_admin": false, 00:17:04.734 "nvme_io": false 00:17:04.734 }, 00:17:04.734 "memory_domains": [ 00:17:04.734 { 00:17:04.734 "dma_device_id": "system", 00:17:04.734 "dma_device_type": 1 00:17:04.734 }, 00:17:04.734 { 00:17:04.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.734 "dma_device_type": 2 00:17:04.734 } 00:17:04.734 ], 00:17:04.734 "driver_specific": {} 00:17:04.734 }' 00:17:04.734 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:04.734 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:04.734 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:04.734 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:04.734 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:04.734 23:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:04.734 23:32:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:04.994 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:04.994 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:04.994 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:04.994 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:04.994 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:04.994 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:05.254 [2024-05-14 23:32:28.359391] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.254 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.515 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.515 "name": "Existed_Raid", 00:17:05.515 "uuid": "24d1d413-0de4-43be-ace0-0567f698bf32", 00:17:05.515 "strip_size_kb": 0, 00:17:05.515 "state": "online", 00:17:05.515 "raid_level": "raid1", 00:17:05.515 "superblock": true, 00:17:05.515 "num_base_bdevs": 3, 00:17:05.515 "num_base_bdevs_discovered": 2, 00:17:05.515 "num_base_bdevs_operational": 2, 00:17:05.515 "base_bdevs_list": [ 00:17:05.515 { 00:17:05.515 "name": null, 00:17:05.515 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:05.515 "is_configured": false, 00:17:05.515 "data_offset": 2048, 00:17:05.515 "data_size": 63488 00:17:05.515 }, 00:17:05.515 { 00:17:05.515 "name": "BaseBdev2", 00:17:05.515 "uuid": "55d4cbd6-2dad-4031-8227-8798d2f26085", 00:17:05.515 "is_configured": true, 00:17:05.515 "data_offset": 2048, 00:17:05.515 "data_size": 63488 00:17:05.515 }, 00:17:05.515 { 00:17:05.515 "name": "BaseBdev3", 00:17:05.515 "uuid": "7926343d-1654-48b9-bd66-165fc17d4da3", 00:17:05.515 "is_configured": true, 00:17:05.515 "data_offset": 2048, 00:17:05.515 "data_size": 63488 00:17:05.515 } 00:17:05.515 ] 00:17:05.515 }' 00:17:05.515 23:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.515 23:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.093 23:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:06.093 23:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:06.093 23:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.093 23:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:06.360 23:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:06.360 23:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:06.360 23:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:06.629 [2024-05-14 23:32:29.813912] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:06.900 23:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:06.900 23:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:06.900 23:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:17:06.900 23:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.172 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:17:07.172 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:07.172 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:07.446 [2024-05-14 23:32:30.499693] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:07.446 [2024-05-14 23:32:30.499784] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.446 [2024-05-14 23:32:30.584392] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.446 [2024-05-14 23:32:30.584512] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.446 [2024-05-14 23:32:30.584529] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:17:07.446 23:32:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:07.446 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:07.446 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.446 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:17:07.708 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:17:07.708 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:17:07.708 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:17:07.708 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:17:07.708 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:07.708 23:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:07.966 BaseBdev2 00:17:07.966 23:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:17:07.966 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:07.966 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:07.966 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:07.966 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:07.966 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:07.966 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:08.224 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:08.224 [ 00:17:08.224 { 00:17:08.224 "name": "BaseBdev2", 00:17:08.224 "aliases": [ 00:17:08.224 "056e24dd-665f-4eb3-a834-6db4d556ad84" 00:17:08.224 ], 00:17:08.224 "product_name": "Malloc disk", 00:17:08.224 "block_size": 512, 00:17:08.224 "num_blocks": 65536, 00:17:08.224 "uuid": "056e24dd-665f-4eb3-a834-6db4d556ad84", 00:17:08.224 "assigned_rate_limits": { 00:17:08.224 "rw_ios_per_sec": 0, 00:17:08.224 "rw_mbytes_per_sec": 0, 00:17:08.224 "r_mbytes_per_sec": 0, 00:17:08.224 "w_mbytes_per_sec": 0 00:17:08.224 }, 00:17:08.224 "claimed": false, 00:17:08.224 "zoned": false, 00:17:08.224 "supported_io_types": { 00:17:08.224 "read": true, 00:17:08.224 "write": true, 00:17:08.224 "unmap": true, 00:17:08.224 "write_zeroes": true, 00:17:08.224 "flush": true, 00:17:08.224 "reset": true, 00:17:08.224 "compare": false, 00:17:08.224 "compare_and_write": false, 00:17:08.224 "abort": true, 00:17:08.224 "nvme_admin": false, 00:17:08.224 "nvme_io": false 00:17:08.224 }, 00:17:08.224 "memory_domains": [ 00:17:08.224 { 00:17:08.224 "dma_device_id": "system", 00:17:08.224 "dma_device_type": 1 00:17:08.224 }, 00:17:08.224 { 00:17:08.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.224 "dma_device_type": 2 00:17:08.224 } 00:17:08.224 ], 
00:17:08.224 "driver_specific": {} 00:17:08.224 } 00:17:08.224 ] 00:17:08.224 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:08.224 23:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:08.224 23:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:08.224 23:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:08.792 BaseBdev3 00:17:08.792 23:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:17:08.792 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:08.792 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:08.792 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:08.792 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:08.792 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:08.792 23:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.051 23:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:09.051 [ 00:17:09.051 { 00:17:09.051 "name": "BaseBdev3", 00:17:09.051 "aliases": [ 00:17:09.051 "7aa81de1-083e-4753-8bbd-27e415861fb8" 00:17:09.051 ], 00:17:09.051 "product_name": "Malloc disk", 00:17:09.051 "block_size": 512, 00:17:09.051 "num_blocks": 65536, 00:17:09.051 "uuid": "7aa81de1-083e-4753-8bbd-27e415861fb8", 00:17:09.051 "assigned_rate_limits": { 00:17:09.051 "rw_ios_per_sec": 0, 00:17:09.051 "rw_mbytes_per_sec": 0, 00:17:09.051 "r_mbytes_per_sec": 0, 00:17:09.051 "w_mbytes_per_sec": 0 00:17:09.051 }, 00:17:09.051 "claimed": false, 00:17:09.051 "zoned": false, 00:17:09.051 "supported_io_types": { 00:17:09.051 "read": true, 00:17:09.051 "write": true, 00:17:09.051 "unmap": true, 00:17:09.051 "write_zeroes": true, 00:17:09.051 "flush": true, 00:17:09.051 "reset": true, 00:17:09.051 "compare": false, 00:17:09.051 "compare_and_write": false, 00:17:09.051 "abort": true, 00:17:09.051 "nvme_admin": false, 00:17:09.051 "nvme_io": false 00:17:09.051 }, 00:17:09.051 "memory_domains": [ 00:17:09.051 { 00:17:09.051 "dma_device_id": "system", 00:17:09.051 "dma_device_type": 1 00:17:09.051 }, 00:17:09.051 { 00:17:09.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.051 "dma_device_type": 2 00:17:09.051 } 00:17:09.051 ], 00:17:09.051 "driver_specific": {} 00:17:09.051 } 00:17:09.051 ] 00:17:09.051 23:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:09.051 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:17:09.051 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:17:09.051 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' 
-n Existed_Raid 00:17:09.310 [2024-05-14 23:32:32.538956] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:09.310 [2024-05-14 23:32:32.539064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:09.310 [2024-05-14 23:32:32.539108] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.310 [2024-05-14 23:32:32.540668] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.310 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.569 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.569 "name": "Existed_Raid", 00:17:09.569 "uuid": "2ffd13dd-ae1c-45ab-a684-693cf7bfdc1c", 00:17:09.569 "strip_size_kb": 0, 00:17:09.569 "state": "configuring", 00:17:09.569 "raid_level": "raid1", 00:17:09.569 "superblock": true, 00:17:09.569 "num_base_bdevs": 3, 00:17:09.569 "num_base_bdevs_discovered": 2, 00:17:09.569 "num_base_bdevs_operational": 3, 00:17:09.569 "base_bdevs_list": [ 00:17:09.569 { 00:17:09.569 "name": "BaseBdev1", 00:17:09.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.569 "is_configured": false, 00:17:09.569 "data_offset": 0, 00:17:09.569 "data_size": 0 00:17:09.569 }, 00:17:09.569 { 00:17:09.569 "name": "BaseBdev2", 00:17:09.569 "uuid": "056e24dd-665f-4eb3-a834-6db4d556ad84", 00:17:09.569 "is_configured": true, 00:17:09.569 "data_offset": 2048, 00:17:09.569 "data_size": 63488 00:17:09.569 }, 00:17:09.569 { 00:17:09.569 "name": "BaseBdev3", 00:17:09.569 "uuid": "7aa81de1-083e-4753-8bbd-27e415861fb8", 00:17:09.569 "is_configured": true, 00:17:09.569 "data_offset": 2048, 00:17:09.569 "data_size": 63488 00:17:09.569 } 00:17:09.569 ] 00:17:09.569 }' 00:17:09.569 23:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.569 23:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.505 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:10.505 [2024-05-14 23:32:33.747048] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:10.505 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:10.505 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.505 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:10.505 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:10.505 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:10.505 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:10.506 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.506 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.506 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.506 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.506 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.506 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.767 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.767 "name": "Existed_Raid", 00:17:10.767 "uuid": "2ffd13dd-ae1c-45ab-a684-693cf7bfdc1c", 00:17:10.767 "strip_size_kb": 0, 00:17:10.767 "state": "configuring", 00:17:10.767 "raid_level": "raid1", 00:17:10.767 "superblock": true, 00:17:10.767 "num_base_bdevs": 3, 00:17:10.767 "num_base_bdevs_discovered": 1, 00:17:10.767 "num_base_bdevs_operational": 3, 00:17:10.767 "base_bdevs_list": [ 00:17:10.767 { 00:17:10.767 "name": "BaseBdev1", 00:17:10.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.767 "is_configured": false, 00:17:10.767 "data_offset": 0, 00:17:10.767 "data_size": 0 00:17:10.767 }, 00:17:10.767 { 00:17:10.767 "name": null, 00:17:10.767 "uuid": "056e24dd-665f-4eb3-a834-6db4d556ad84", 00:17:10.767 "is_configured": false, 00:17:10.767 "data_offset": 2048, 00:17:10.767 "data_size": 63488 00:17:10.767 }, 00:17:10.767 { 00:17:10.767 "name": "BaseBdev3", 00:17:10.767 "uuid": "7aa81de1-083e-4753-8bbd-27e415861fb8", 00:17:10.767 "is_configured": true, 00:17:10.767 "data_offset": 2048, 00:17:10.767 "data_size": 63488 00:17:10.767 } 00:17:10.767 ] 00:17:10.767 }' 00:17:10.767 23:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.767 23:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.722 23:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.722 23:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:11.722 23:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false 
== \f\a\l\s\e ]] 00:17:11.722 23:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:11.989 BaseBdev1 00:17:11.989 [2024-05-14 23:32:35.163260] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.989 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:17:11.989 23:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:11.989 23:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:11.989 23:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:11.989 23:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:11.989 23:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:11.989 23:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:12.257 23:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:12.518 [ 00:17:12.518 { 00:17:12.518 "name": "BaseBdev1", 00:17:12.518 "aliases": [ 00:17:12.518 "8b2adb21-bfc7-412e-902a-98ca265236f4" 00:17:12.518 ], 00:17:12.518 "product_name": "Malloc disk", 00:17:12.518 "block_size": 512, 00:17:12.518 "num_blocks": 65536, 00:17:12.518 "uuid": "8b2adb21-bfc7-412e-902a-98ca265236f4", 00:17:12.518 "assigned_rate_limits": { 00:17:12.518 "rw_ios_per_sec": 0, 00:17:12.518 "rw_mbytes_per_sec": 0, 00:17:12.518 "r_mbytes_per_sec": 0, 00:17:12.518 "w_mbytes_per_sec": 0 00:17:12.518 }, 00:17:12.518 "claimed": true, 00:17:12.518 "claim_type": "exclusive_write", 00:17:12.518 "zoned": false, 00:17:12.518 "supported_io_types": { 00:17:12.518 "read": true, 00:17:12.518 "write": true, 00:17:12.518 "unmap": true, 00:17:12.518 "write_zeroes": true, 00:17:12.518 "flush": true, 00:17:12.518 "reset": true, 00:17:12.518 "compare": false, 00:17:12.518 "compare_and_write": false, 00:17:12.518 "abort": true, 00:17:12.518 "nvme_admin": false, 00:17:12.518 "nvme_io": false 00:17:12.518 }, 00:17:12.518 "memory_domains": [ 00:17:12.518 { 00:17:12.518 "dma_device_id": "system", 00:17:12.518 "dma_device_type": 1 00:17:12.518 }, 00:17:12.518 { 00:17:12.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.518 "dma_device_type": 2 00:17:12.518 } 00:17:12.518 ], 00:17:12.518 "driver_specific": {} 00:17:12.518 } 00:17:12.518 ] 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:12.518 23:32:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.518 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.778 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.778 "name": "Existed_Raid", 00:17:12.778 "uuid": "2ffd13dd-ae1c-45ab-a684-693cf7bfdc1c", 00:17:12.778 "strip_size_kb": 0, 00:17:12.778 "state": "configuring", 00:17:12.778 "raid_level": "raid1", 00:17:12.778 "superblock": true, 00:17:12.778 "num_base_bdevs": 3, 00:17:12.778 "num_base_bdevs_discovered": 2, 00:17:12.778 "num_base_bdevs_operational": 3, 00:17:12.778 "base_bdevs_list": [ 00:17:12.778 { 00:17:12.778 "name": "BaseBdev1", 00:17:12.778 "uuid": "8b2adb21-bfc7-412e-902a-98ca265236f4", 00:17:12.778 "is_configured": true, 00:17:12.778 "data_offset": 2048, 00:17:12.778 "data_size": 63488 00:17:12.778 }, 00:17:12.778 { 00:17:12.778 "name": null, 00:17:12.778 "uuid": "056e24dd-665f-4eb3-a834-6db4d556ad84", 00:17:12.778 "is_configured": false, 00:17:12.778 "data_offset": 2048, 00:17:12.778 "data_size": 63488 00:17:12.778 }, 00:17:12.778 { 00:17:12.778 "name": "BaseBdev3", 00:17:12.778 "uuid": "7aa81de1-083e-4753-8bbd-27e415861fb8", 00:17:12.778 "is_configured": true, 00:17:12.778 "data_offset": 2048, 00:17:12.778 "data_size": 63488 00:17:12.778 } 00:17:12.778 ] 00:17:12.778 }' 00:17:12.778 23:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.778 23:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.345 23:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.345 23:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:13.604 23:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:13.604 23:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:13.863 [2024-05-14 23:32:37.043593] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:13.863 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:13.863 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:13.863 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:13.863 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:13.863 23:32:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:13.863 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:13.863 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:13.863 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:13.863 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:13.863 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:13.863 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.863 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.122 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.122 "name": "Existed_Raid", 00:17:14.122 "uuid": "2ffd13dd-ae1c-45ab-a684-693cf7bfdc1c", 00:17:14.122 "strip_size_kb": 0, 00:17:14.122 "state": "configuring", 00:17:14.122 "raid_level": "raid1", 00:17:14.122 "superblock": true, 00:17:14.122 "num_base_bdevs": 3, 00:17:14.122 "num_base_bdevs_discovered": 1, 00:17:14.122 "num_base_bdevs_operational": 3, 00:17:14.122 "base_bdevs_list": [ 00:17:14.122 { 00:17:14.122 "name": "BaseBdev1", 00:17:14.122 "uuid": "8b2adb21-bfc7-412e-902a-98ca265236f4", 00:17:14.122 "is_configured": true, 00:17:14.122 "data_offset": 2048, 00:17:14.122 "data_size": 63488 00:17:14.122 }, 00:17:14.122 { 00:17:14.122 "name": null, 00:17:14.122 "uuid": "056e24dd-665f-4eb3-a834-6db4d556ad84", 00:17:14.122 "is_configured": false, 00:17:14.122 "data_offset": 2048, 00:17:14.122 "data_size": 63488 00:17:14.122 }, 00:17:14.122 { 00:17:14.122 "name": null, 00:17:14.122 "uuid": "7aa81de1-083e-4753-8bbd-27e415861fb8", 00:17:14.122 "is_configured": false, 00:17:14.122 "data_offset": 2048, 00:17:14.122 "data_size": 63488 00:17:14.122 } 00:17:14.122 ] 00:17:14.122 }' 00:17:14.122 23:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.123 23:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.058 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.058 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:15.058 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:17:15.058 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:15.318 [2024-05-14 23:32:38.403746] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 
00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.318 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.577 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:15.577 "name": "Existed_Raid", 00:17:15.577 "uuid": "2ffd13dd-ae1c-45ab-a684-693cf7bfdc1c", 00:17:15.577 "strip_size_kb": 0, 00:17:15.577 "state": "configuring", 00:17:15.577 "raid_level": "raid1", 00:17:15.577 "superblock": true, 00:17:15.577 "num_base_bdevs": 3, 00:17:15.577 "num_base_bdevs_discovered": 2, 00:17:15.577 "num_base_bdevs_operational": 3, 00:17:15.577 "base_bdevs_list": [ 00:17:15.577 { 00:17:15.577 "name": "BaseBdev1", 00:17:15.577 "uuid": "8b2adb21-bfc7-412e-902a-98ca265236f4", 00:17:15.577 "is_configured": true, 00:17:15.577 "data_offset": 2048, 00:17:15.577 "data_size": 63488 00:17:15.577 }, 00:17:15.577 { 00:17:15.577 "name": null, 00:17:15.577 "uuid": "056e24dd-665f-4eb3-a834-6db4d556ad84", 00:17:15.577 "is_configured": false, 00:17:15.577 "data_offset": 2048, 00:17:15.577 "data_size": 63488 00:17:15.577 }, 00:17:15.577 { 00:17:15.577 "name": "BaseBdev3", 00:17:15.577 "uuid": "7aa81de1-083e-4753-8bbd-27e415861fb8", 00:17:15.577 "is_configured": true, 00:17:15.577 "data_offset": 2048, 00:17:15.577 "data_size": 63488 00:17:15.577 } 00:17:15.577 ] 00:17:15.577 }' 00:17:15.577 23:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:15.577 23:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.144 23:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.144 23:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:16.403 23:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:17:16.403 23:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:16.661 [2024-05-14 23:32:39.911981] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.920 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:16.920 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:16.920 23:32:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:16.920 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:16.920 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:16.920 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:16.920 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:16.920 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:16.920 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:16.920 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:16.920 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.920 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.179 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.179 "name": "Existed_Raid", 00:17:17.179 "uuid": "2ffd13dd-ae1c-45ab-a684-693cf7bfdc1c", 00:17:17.179 "strip_size_kb": 0, 00:17:17.179 "state": "configuring", 00:17:17.179 "raid_level": "raid1", 00:17:17.179 "superblock": true, 00:17:17.179 "num_base_bdevs": 3, 00:17:17.179 "num_base_bdevs_discovered": 1, 00:17:17.179 "num_base_bdevs_operational": 3, 00:17:17.179 "base_bdevs_list": [ 00:17:17.179 { 00:17:17.179 "name": null, 00:17:17.179 "uuid": "8b2adb21-bfc7-412e-902a-98ca265236f4", 00:17:17.179 "is_configured": false, 00:17:17.179 "data_offset": 2048, 00:17:17.179 "data_size": 63488 00:17:17.179 }, 00:17:17.179 { 00:17:17.179 "name": null, 00:17:17.179 "uuid": "056e24dd-665f-4eb3-a834-6db4d556ad84", 00:17:17.179 "is_configured": false, 00:17:17.179 "data_offset": 2048, 00:17:17.179 "data_size": 63488 00:17:17.179 }, 00:17:17.179 { 00:17:17.179 "name": "BaseBdev3", 00:17:17.179 "uuid": "7aa81de1-083e-4753-8bbd-27e415861fb8", 00:17:17.179 "is_configured": true, 00:17:17.179 "data_offset": 2048, 00:17:17.179 "data_size": 63488 00:17:17.179 } 00:17:17.179 ] 00:17:17.179 }' 00:17:17.179 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.179 23:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.748 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.748 23:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:18.007 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:17:18.007 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:18.266 [2024-05-14 23:32:41.308436] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.266 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 
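The trace around this point exercises the base-bdev replacement cycle for the RAID1 volume; condensed into the bare RPC sequence it amounts to the following (rpc and uuid are illustrative shell variables, everything else is taken verbatim from the calls logged in this section):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_delete BaseBdev1                      # drops the array back to "configuring"
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2    # re-claim an existing malloc bdev
# Recover the UUID recorded for the missing slot and recreate a bdev with it.
uuid=$($rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
$rpc bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"   # 32 MiB, 512-byte blocks
$rpc bdev_wait_for_examine                             # once claimed, the raid goes "online"
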
00:17:18.266 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:18.266 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:18.266 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:18.266 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:18.266 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:18.266 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.266 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.267 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.267 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.267 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.267 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.267 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:18.267 "name": "Existed_Raid", 00:17:18.267 "uuid": "2ffd13dd-ae1c-45ab-a684-693cf7bfdc1c", 00:17:18.267 "strip_size_kb": 0, 00:17:18.267 "state": "configuring", 00:17:18.267 "raid_level": "raid1", 00:17:18.267 "superblock": true, 00:17:18.267 "num_base_bdevs": 3, 00:17:18.267 "num_base_bdevs_discovered": 2, 00:17:18.267 "num_base_bdevs_operational": 3, 00:17:18.267 "base_bdevs_list": [ 00:17:18.267 { 00:17:18.267 "name": null, 00:17:18.267 "uuid": "8b2adb21-bfc7-412e-902a-98ca265236f4", 00:17:18.267 "is_configured": false, 00:17:18.267 "data_offset": 2048, 00:17:18.267 "data_size": 63488 00:17:18.267 }, 00:17:18.267 { 00:17:18.267 "name": "BaseBdev2", 00:17:18.267 "uuid": "056e24dd-665f-4eb3-a834-6db4d556ad84", 00:17:18.267 "is_configured": true, 00:17:18.267 "data_offset": 2048, 00:17:18.267 "data_size": 63488 00:17:18.267 }, 00:17:18.267 { 00:17:18.267 "name": "BaseBdev3", 00:17:18.267 "uuid": "7aa81de1-083e-4753-8bbd-27e415861fb8", 00:17:18.267 "is_configured": true, 00:17:18.267 "data_offset": 2048, 00:17:18.267 "data_size": 63488 00:17:18.267 } 00:17:18.267 ] 00:17:18.267 }' 00:17:18.267 23:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:18.267 23:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.204 23:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.204 23:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:19.517 23:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:17:19.517 23:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.517 23:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:19.517 23:32:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8b2adb21-bfc7-412e-902a-98ca265236f4 00:17:19.791 [2024-05-14 23:32:43.003727] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:19.791 [2024-05-14 23:32:43.003917] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:17:19.791 [2024-05-14 23:32:43.003933] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:19.791 [2024-05-14 23:32:43.004027] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:19.791 NewBaseBdev 00:17:19.791 [2024-05-14 23:32:43.004563] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:17:19.791 [2024-05-14 23:32:43.004585] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:17:19.791 [2024-05-14 23:32:43.004686] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.791 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:17:19.791 23:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:17:19.791 23:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:19.791 23:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:19.792 23:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:19.792 23:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:19.792 23:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:20.049 23:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:20.307 [ 00:17:20.307 { 00:17:20.307 "name": "NewBaseBdev", 00:17:20.307 "aliases": [ 00:17:20.307 "8b2adb21-bfc7-412e-902a-98ca265236f4" 00:17:20.307 ], 00:17:20.307 "product_name": "Malloc disk", 00:17:20.307 "block_size": 512, 00:17:20.307 "num_blocks": 65536, 00:17:20.307 "uuid": "8b2adb21-bfc7-412e-902a-98ca265236f4", 00:17:20.307 "assigned_rate_limits": { 00:17:20.307 "rw_ios_per_sec": 0, 00:17:20.307 "rw_mbytes_per_sec": 0, 00:17:20.307 "r_mbytes_per_sec": 0, 00:17:20.307 "w_mbytes_per_sec": 0 00:17:20.307 }, 00:17:20.307 "claimed": true, 00:17:20.307 "claim_type": "exclusive_write", 00:17:20.308 "zoned": false, 00:17:20.308 "supported_io_types": { 00:17:20.308 "read": true, 00:17:20.308 "write": true, 00:17:20.308 "unmap": true, 00:17:20.308 "write_zeroes": true, 00:17:20.308 "flush": true, 00:17:20.308 "reset": true, 00:17:20.308 "compare": false, 00:17:20.308 "compare_and_write": false, 00:17:20.308 "abort": true, 00:17:20.308 "nvme_admin": false, 00:17:20.308 "nvme_io": false 00:17:20.308 }, 00:17:20.308 "memory_domains": [ 00:17:20.308 { 00:17:20.308 "dma_device_id": "system", 00:17:20.308 "dma_device_type": 1 00:17:20.308 }, 00:17:20.308 { 00:17:20.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.308 "dma_device_type": 2 00:17:20.308 } 00:17:20.308 ], 00:17:20.308 
"driver_specific": {} 00:17:20.308 } 00:17:20.308 ] 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.308 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.565 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.565 "name": "Existed_Raid", 00:17:20.565 "uuid": "2ffd13dd-ae1c-45ab-a684-693cf7bfdc1c", 00:17:20.565 "strip_size_kb": 0, 00:17:20.565 "state": "online", 00:17:20.565 "raid_level": "raid1", 00:17:20.565 "superblock": true, 00:17:20.565 "num_base_bdevs": 3, 00:17:20.565 "num_base_bdevs_discovered": 3, 00:17:20.565 "num_base_bdevs_operational": 3, 00:17:20.565 "base_bdevs_list": [ 00:17:20.565 { 00:17:20.565 "name": "NewBaseBdev", 00:17:20.565 "uuid": "8b2adb21-bfc7-412e-902a-98ca265236f4", 00:17:20.565 "is_configured": true, 00:17:20.565 "data_offset": 2048, 00:17:20.565 "data_size": 63488 00:17:20.565 }, 00:17:20.565 { 00:17:20.565 "name": "BaseBdev2", 00:17:20.565 "uuid": "056e24dd-665f-4eb3-a834-6db4d556ad84", 00:17:20.565 "is_configured": true, 00:17:20.565 "data_offset": 2048, 00:17:20.565 "data_size": 63488 00:17:20.565 }, 00:17:20.565 { 00:17:20.565 "name": "BaseBdev3", 00:17:20.565 "uuid": "7aa81de1-083e-4753-8bbd-27e415861fb8", 00:17:20.565 "is_configured": true, 00:17:20.565 "data_offset": 2048, 00:17:20.565 "data_size": 63488 00:17:20.565 } 00:17:20.565 ] 00:17:20.565 }' 00:17:20.565 23:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.565 23:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.496 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:17:21.496 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:17:21.496 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:21.496 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_info 00:17:21.496 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:21.496 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:17:21.496 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:21.496 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:21.496 [2024-05-14 23:32:44.724194] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.496 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:21.496 "name": "Existed_Raid", 00:17:21.496 "aliases": [ 00:17:21.496 "2ffd13dd-ae1c-45ab-a684-693cf7bfdc1c" 00:17:21.496 ], 00:17:21.496 "product_name": "Raid Volume", 00:17:21.496 "block_size": 512, 00:17:21.496 "num_blocks": 63488, 00:17:21.496 "uuid": "2ffd13dd-ae1c-45ab-a684-693cf7bfdc1c", 00:17:21.496 "assigned_rate_limits": { 00:17:21.496 "rw_ios_per_sec": 0, 00:17:21.496 "rw_mbytes_per_sec": 0, 00:17:21.496 "r_mbytes_per_sec": 0, 00:17:21.496 "w_mbytes_per_sec": 0 00:17:21.496 }, 00:17:21.496 "claimed": false, 00:17:21.496 "zoned": false, 00:17:21.496 "supported_io_types": { 00:17:21.496 "read": true, 00:17:21.496 "write": true, 00:17:21.496 "unmap": false, 00:17:21.496 "write_zeroes": true, 00:17:21.496 "flush": false, 00:17:21.496 "reset": true, 00:17:21.496 "compare": false, 00:17:21.496 "compare_and_write": false, 00:17:21.496 "abort": false, 00:17:21.496 "nvme_admin": false, 00:17:21.496 "nvme_io": false 00:17:21.496 }, 00:17:21.496 "memory_domains": [ 00:17:21.496 { 00:17:21.496 "dma_device_id": "system", 00:17:21.496 "dma_device_type": 1 00:17:21.496 }, 00:17:21.496 { 00:17:21.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.496 "dma_device_type": 2 00:17:21.496 }, 00:17:21.496 { 00:17:21.496 "dma_device_id": "system", 00:17:21.496 "dma_device_type": 1 00:17:21.496 }, 00:17:21.496 { 00:17:21.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.496 "dma_device_type": 2 00:17:21.496 }, 00:17:21.496 { 00:17:21.496 "dma_device_id": "system", 00:17:21.496 "dma_device_type": 1 00:17:21.496 }, 00:17:21.496 { 00:17:21.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.496 "dma_device_type": 2 00:17:21.496 } 00:17:21.496 ], 00:17:21.496 "driver_specific": { 00:17:21.496 "raid": { 00:17:21.496 "uuid": "2ffd13dd-ae1c-45ab-a684-693cf7bfdc1c", 00:17:21.496 "strip_size_kb": 0, 00:17:21.496 "state": "online", 00:17:21.496 "raid_level": "raid1", 00:17:21.496 "superblock": true, 00:17:21.496 "num_base_bdevs": 3, 00:17:21.496 "num_base_bdevs_discovered": 3, 00:17:21.496 "num_base_bdevs_operational": 3, 00:17:21.496 "base_bdevs_list": [ 00:17:21.496 { 00:17:21.496 "name": "NewBaseBdev", 00:17:21.496 "uuid": "8b2adb21-bfc7-412e-902a-98ca265236f4", 00:17:21.496 "is_configured": true, 00:17:21.496 "data_offset": 2048, 00:17:21.496 "data_size": 63488 00:17:21.496 }, 00:17:21.496 { 00:17:21.496 "name": "BaseBdev2", 00:17:21.496 "uuid": "056e24dd-665f-4eb3-a834-6db4d556ad84", 00:17:21.496 "is_configured": true, 00:17:21.496 "data_offset": 2048, 00:17:21.496 "data_size": 63488 00:17:21.496 }, 00:17:21.496 { 00:17:21.496 "name": "BaseBdev3", 00:17:21.496 "uuid": "7aa81de1-083e-4753-8bbd-27e415861fb8", 00:17:21.496 "is_configured": true, 00:17:21.496 "data_offset": 2048, 00:17:21.496 "data_size": 63488 00:17:21.496 } 00:17:21.496 ] 00:17:21.496 } 
00:17:21.496 } 00:17:21.496 }' 00:17:21.496 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.755 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:17:21.755 BaseBdev2 00:17:21.755 BaseBdev3' 00:17:21.755 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:21.755 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:21.755 23:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:21.755 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:21.755 "name": "NewBaseBdev", 00:17:21.755 "aliases": [ 00:17:21.755 "8b2adb21-bfc7-412e-902a-98ca265236f4" 00:17:21.755 ], 00:17:21.755 "product_name": "Malloc disk", 00:17:21.755 "block_size": 512, 00:17:21.755 "num_blocks": 65536, 00:17:21.755 "uuid": "8b2adb21-bfc7-412e-902a-98ca265236f4", 00:17:21.755 "assigned_rate_limits": { 00:17:21.755 "rw_ios_per_sec": 0, 00:17:21.755 "rw_mbytes_per_sec": 0, 00:17:21.755 "r_mbytes_per_sec": 0, 00:17:21.755 "w_mbytes_per_sec": 0 00:17:21.755 }, 00:17:21.755 "claimed": true, 00:17:21.755 "claim_type": "exclusive_write", 00:17:21.755 "zoned": false, 00:17:21.755 "supported_io_types": { 00:17:21.755 "read": true, 00:17:21.755 "write": true, 00:17:21.755 "unmap": true, 00:17:21.755 "write_zeroes": true, 00:17:21.755 "flush": true, 00:17:21.755 "reset": true, 00:17:21.755 "compare": false, 00:17:21.755 "compare_and_write": false, 00:17:21.755 "abort": true, 00:17:21.755 "nvme_admin": false, 00:17:21.755 "nvme_io": false 00:17:21.755 }, 00:17:21.755 "memory_domains": [ 00:17:21.755 { 00:17:21.755 "dma_device_id": "system", 00:17:21.755 "dma_device_type": 1 00:17:21.755 }, 00:17:21.755 { 00:17:21.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.755 "dma_device_type": 2 00:17:21.755 } 00:17:21.755 ], 00:17:21.755 "driver_specific": {} 00:17:21.755 }' 00:17:21.755 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:22.014 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:22.014 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:22.014 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:22.014 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:22.014 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.014 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:22.014 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:22.272 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:22.272 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:22.272 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:22.272 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:22.272 23:32:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:22.272 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:22.272 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:22.531 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:22.531 "name": "BaseBdev2", 00:17:22.531 "aliases": [ 00:17:22.531 "056e24dd-665f-4eb3-a834-6db4d556ad84" 00:17:22.531 ], 00:17:22.531 "product_name": "Malloc disk", 00:17:22.531 "block_size": 512, 00:17:22.531 "num_blocks": 65536, 00:17:22.531 "uuid": "056e24dd-665f-4eb3-a834-6db4d556ad84", 00:17:22.531 "assigned_rate_limits": { 00:17:22.531 "rw_ios_per_sec": 0, 00:17:22.531 "rw_mbytes_per_sec": 0, 00:17:22.531 "r_mbytes_per_sec": 0, 00:17:22.531 "w_mbytes_per_sec": 0 00:17:22.531 }, 00:17:22.531 "claimed": true, 00:17:22.531 "claim_type": "exclusive_write", 00:17:22.531 "zoned": false, 00:17:22.531 "supported_io_types": { 00:17:22.531 "read": true, 00:17:22.531 "write": true, 00:17:22.531 "unmap": true, 00:17:22.531 "write_zeroes": true, 00:17:22.531 "flush": true, 00:17:22.531 "reset": true, 00:17:22.531 "compare": false, 00:17:22.531 "compare_and_write": false, 00:17:22.531 "abort": true, 00:17:22.531 "nvme_admin": false, 00:17:22.531 "nvme_io": false 00:17:22.531 }, 00:17:22.531 "memory_domains": [ 00:17:22.531 { 00:17:22.531 "dma_device_id": "system", 00:17:22.531 "dma_device_type": 1 00:17:22.531 }, 00:17:22.531 { 00:17:22.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.531 "dma_device_type": 2 00:17:22.531 } 00:17:22.531 ], 00:17:22.531 "driver_specific": {} 00:17:22.531 }' 00:17:22.531 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:22.531 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:22.531 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:22.531 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:22.531 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:22.790 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.790 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:22.790 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:22.790 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:22.790 23:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:22.790 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:23.067 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:23.067 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:23.067 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:23.067 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:23.067 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 
00:17:23.067 "name": "BaseBdev3", 00:17:23.067 "aliases": [ 00:17:23.067 "7aa81de1-083e-4753-8bbd-27e415861fb8" 00:17:23.067 ], 00:17:23.067 "product_name": "Malloc disk", 00:17:23.067 "block_size": 512, 00:17:23.068 "num_blocks": 65536, 00:17:23.068 "uuid": "7aa81de1-083e-4753-8bbd-27e415861fb8", 00:17:23.068 "assigned_rate_limits": { 00:17:23.068 "rw_ios_per_sec": 0, 00:17:23.068 "rw_mbytes_per_sec": 0, 00:17:23.068 "r_mbytes_per_sec": 0, 00:17:23.068 "w_mbytes_per_sec": 0 00:17:23.068 }, 00:17:23.068 "claimed": true, 00:17:23.068 "claim_type": "exclusive_write", 00:17:23.068 "zoned": false, 00:17:23.068 "supported_io_types": { 00:17:23.068 "read": true, 00:17:23.068 "write": true, 00:17:23.068 "unmap": true, 00:17:23.068 "write_zeroes": true, 00:17:23.068 "flush": true, 00:17:23.068 "reset": true, 00:17:23.068 "compare": false, 00:17:23.068 "compare_and_write": false, 00:17:23.068 "abort": true, 00:17:23.068 "nvme_admin": false, 00:17:23.068 "nvme_io": false 00:17:23.068 }, 00:17:23.068 "memory_domains": [ 00:17:23.068 { 00:17:23.068 "dma_device_id": "system", 00:17:23.068 "dma_device_type": 1 00:17:23.068 }, 00:17:23.068 { 00:17:23.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.068 "dma_device_type": 2 00:17:23.068 } 00:17:23.068 ], 00:17:23.068 "driver_specific": {} 00:17:23.068 }' 00:17:23.068 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:23.345 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:23.345 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:23.345 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:23.345 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:23.345 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:23.345 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:23.345 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:23.603 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:23.603 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:23.603 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:23.603 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:23.603 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:23.861 [2024-05-14 23:32:46.928334] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:23.861 [2024-05-14 23:32:46.928380] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.861 [2024-05-14 23:32:46.928451] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.861 [2024-05-14 23:32:46.928639] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.861 [2024-05-14 23:32:46.928652] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:17:23.861 23:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # 
killprocess 62390 00:17:23.861 23:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 62390 ']' 00:17:23.861 23:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 62390 00:17:23.861 23:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:17:23.861 23:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:23.861 23:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62390 00:17:23.861 killing process with pid 62390 00:17:23.861 23:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:23.861 23:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:23.861 23:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62390' 00:17:23.861 23:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 62390 00:17:23.861 23:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 62390 00:17:23.861 [2024-05-14 23:32:46.966816] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.120 [2024-05-14 23:32:47.324231] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:25.496 ************************************ 00:17:25.496 END TEST raid_state_function_test_sb 00:17:25.496 ************************************ 00:17:25.496 23:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:17:25.496 00:17:25.496 real 0m31.476s 00:17:25.496 user 0m59.320s 00:17:25.496 sys 0m3.069s 00:17:25.496 23:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:25.496 23:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.496 23:32:48 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:25.496 23:32:48 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:25.496 23:32:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:25.496 23:32:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.496 ************************************ 00:17:25.496 START TEST raid_superblock_test 00:17:25.496 ************************************ 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 3 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:25.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63391 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63391 /var/tmp/spdk-raid.sock 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 63391 ']' 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:25.496 23:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.496 [2024-05-14 23:32:48.748995] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
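Before the assertions start, raid_superblock_test builds its fixture against the bdev_svc app that has just started listening on /var/tmp/spdk-raid.sock: three malloc bdevs, each wrapped in a passthru bdev with a fixed UUID, assembled into a RAID1 volume with a superblock. A condensed sketch of that setup, using the same RPC calls and values that appear in the trace below (rpc is an illustrative variable; the test itself drives this through its bash loops over the base_bdevs arrays):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# One base device: a 32 MiB malloc bdev (512-byte blocks) behind a passthru bdev.
$rpc bdev_malloc_create 32 512 -b malloc1
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
# ...repeated for malloc2/pt2 and malloc3/pt3 with UUIDs ending in ...0002 and ...0003...
# Assemble the passthru bdevs into a RAID1 volume; -s enables the superblock
# (matching "superblock": true in the bdev dumps that follow).
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
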
00:17:25.496 [2024-05-14 23:32:48.749639] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63391 ] 00:17:25.755 [2024-05-14 23:32:48.920034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.013 [2024-05-14 23:32:49.144994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.270 [2024-05-14 23:32:49.342920] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.528 23:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:26.528 23:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:17:26.528 23:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:26.528 23:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:26.528 23:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:26.528 23:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:26.528 23:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:26.528 23:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:26.528 23:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:26.529 23:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:26.529 23:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:26.792 malloc1 00:17:26.792 23:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:26.792 [2024-05-14 23:32:50.027416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:26.792 [2024-05-14 23:32:50.027511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.792 [2024-05-14 23:32:50.027574] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:17:26.792 [2024-05-14 23:32:50.027618] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.792 [2024-05-14 23:32:50.029581] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.792 [2024-05-14 23:32:50.029621] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:26.792 pt1 00:17:26.792 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:26.792 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:26.792 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:26.792 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:26.792 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:26.792 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:17:26.792 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:26.792 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:26.792 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:27.063 malloc2 00:17:27.063 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:27.321 [2024-05-14 23:32:50.453562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:27.321 [2024-05-14 23:32:50.453659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.321 [2024-05-14 23:32:50.453707] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:17:27.321 [2024-05-14 23:32:50.453747] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.321 [2024-05-14 23:32:50.455665] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.321 [2024-05-14 23:32:50.455711] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:27.321 pt2 00:17:27.321 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:27.321 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:27.321 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:27.321 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:27.321 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:27.321 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:27.321 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:27.321 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:27.321 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:27.580 malloc3 00:17:27.580 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:27.839 [2024-05-14 23:32:50.907558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:27.839 [2024-05-14 23:32:50.907662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.839 [2024-05-14 23:32:50.907711] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:17:27.839 [2024-05-14 23:32:50.907763] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.839 [2024-05-14 23:32:50.909627] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.839 [2024-05-14 23:32:50.909675] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:27.839 pt3 00:17:27.839 23:32:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:27.839 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:27.839 23:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:27.840 [2024-05-14 23:32:51.099659] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:27.840 [2024-05-14 23:32:51.101260] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:27.840 [2024-05-14 23:32:51.101315] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:27.840 [2024-05-14 23:32:51.101448] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:17:27.840 [2024-05-14 23:32:51.101463] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:27.840 [2024-05-14 23:32:51.101588] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:27.840 [2024-05-14 23:32:51.101867] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:17:27.840 [2024-05-14 23:32:51.101884] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:17:27.840 [2024-05-14 23:32:51.102003] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.840 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.098 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.098 "name": "raid_bdev1", 00:17:28.098 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:28.098 "strip_size_kb": 0, 00:17:28.098 "state": "online", 00:17:28.098 "raid_level": "raid1", 00:17:28.098 "superblock": true, 00:17:28.098 "num_base_bdevs": 3, 00:17:28.098 "num_base_bdevs_discovered": 3, 00:17:28.098 "num_base_bdevs_operational": 3, 00:17:28.098 "base_bdevs_list": [ 00:17:28.098 { 00:17:28.098 "name": "pt1", 00:17:28.098 "uuid": "2b3c25dd-ba79-5934-8be2-3d93a91c96d3", 00:17:28.098 
"is_configured": true, 00:17:28.098 "data_offset": 2048, 00:17:28.098 "data_size": 63488 00:17:28.098 }, 00:17:28.098 { 00:17:28.098 "name": "pt2", 00:17:28.098 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:28.098 "is_configured": true, 00:17:28.098 "data_offset": 2048, 00:17:28.098 "data_size": 63488 00:17:28.098 }, 00:17:28.098 { 00:17:28.098 "name": "pt3", 00:17:28.098 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:28.098 "is_configured": true, 00:17:28.098 "data_offset": 2048, 00:17:28.098 "data_size": 63488 00:17:28.098 } 00:17:28.098 ] 00:17:28.099 }' 00:17:28.099 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.099 23:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.033 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:29.033 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:17:29.033 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:29.033 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:29.033 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:29.033 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:17:29.033 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:29.033 23:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:29.033 [2024-05-14 23:32:52.235952] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.033 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:29.033 "name": "raid_bdev1", 00:17:29.033 "aliases": [ 00:17:29.033 "0b4dd769-26fc-40ce-b10c-f0638baacb0a" 00:17:29.033 ], 00:17:29.033 "product_name": "Raid Volume", 00:17:29.033 "block_size": 512, 00:17:29.033 "num_blocks": 63488, 00:17:29.033 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:29.033 "assigned_rate_limits": { 00:17:29.034 "rw_ios_per_sec": 0, 00:17:29.034 "rw_mbytes_per_sec": 0, 00:17:29.034 "r_mbytes_per_sec": 0, 00:17:29.034 "w_mbytes_per_sec": 0 00:17:29.034 }, 00:17:29.034 "claimed": false, 00:17:29.034 "zoned": false, 00:17:29.034 "supported_io_types": { 00:17:29.034 "read": true, 00:17:29.034 "write": true, 00:17:29.034 "unmap": false, 00:17:29.034 "write_zeroes": true, 00:17:29.034 "flush": false, 00:17:29.034 "reset": true, 00:17:29.034 "compare": false, 00:17:29.034 "compare_and_write": false, 00:17:29.034 "abort": false, 00:17:29.034 "nvme_admin": false, 00:17:29.034 "nvme_io": false 00:17:29.034 }, 00:17:29.034 "memory_domains": [ 00:17:29.034 { 00:17:29.034 "dma_device_id": "system", 00:17:29.034 "dma_device_type": 1 00:17:29.034 }, 00:17:29.034 { 00:17:29.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.034 "dma_device_type": 2 00:17:29.034 }, 00:17:29.034 { 00:17:29.034 "dma_device_id": "system", 00:17:29.034 "dma_device_type": 1 00:17:29.034 }, 00:17:29.034 { 00:17:29.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.034 "dma_device_type": 2 00:17:29.034 }, 00:17:29.034 { 00:17:29.034 "dma_device_id": "system", 00:17:29.034 "dma_device_type": 1 00:17:29.034 }, 00:17:29.034 { 00:17:29.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.034 
"dma_device_type": 2 00:17:29.034 } 00:17:29.034 ], 00:17:29.034 "driver_specific": { 00:17:29.034 "raid": { 00:17:29.034 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:29.034 "strip_size_kb": 0, 00:17:29.034 "state": "online", 00:17:29.034 "raid_level": "raid1", 00:17:29.034 "superblock": true, 00:17:29.034 "num_base_bdevs": 3, 00:17:29.034 "num_base_bdevs_discovered": 3, 00:17:29.034 "num_base_bdevs_operational": 3, 00:17:29.034 "base_bdevs_list": [ 00:17:29.034 { 00:17:29.034 "name": "pt1", 00:17:29.034 "uuid": "2b3c25dd-ba79-5934-8be2-3d93a91c96d3", 00:17:29.034 "is_configured": true, 00:17:29.034 "data_offset": 2048, 00:17:29.034 "data_size": 63488 00:17:29.034 }, 00:17:29.034 { 00:17:29.034 "name": "pt2", 00:17:29.034 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:29.034 "is_configured": true, 00:17:29.034 "data_offset": 2048, 00:17:29.034 "data_size": 63488 00:17:29.034 }, 00:17:29.034 { 00:17:29.034 "name": "pt3", 00:17:29.034 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:29.034 "is_configured": true, 00:17:29.034 "data_offset": 2048, 00:17:29.034 "data_size": 63488 00:17:29.034 } 00:17:29.034 ] 00:17:29.034 } 00:17:29.034 } 00:17:29.034 }' 00:17:29.034 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.292 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:17:29.292 pt2 00:17:29.292 pt3' 00:17:29.292 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:29.292 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:29.292 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:29.551 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:29.551 "name": "pt1", 00:17:29.551 "aliases": [ 00:17:29.551 "2b3c25dd-ba79-5934-8be2-3d93a91c96d3" 00:17:29.551 ], 00:17:29.551 "product_name": "passthru", 00:17:29.551 "block_size": 512, 00:17:29.551 "num_blocks": 65536, 00:17:29.551 "uuid": "2b3c25dd-ba79-5934-8be2-3d93a91c96d3", 00:17:29.551 "assigned_rate_limits": { 00:17:29.551 "rw_ios_per_sec": 0, 00:17:29.551 "rw_mbytes_per_sec": 0, 00:17:29.551 "r_mbytes_per_sec": 0, 00:17:29.551 "w_mbytes_per_sec": 0 00:17:29.551 }, 00:17:29.551 "claimed": true, 00:17:29.551 "claim_type": "exclusive_write", 00:17:29.551 "zoned": false, 00:17:29.551 "supported_io_types": { 00:17:29.551 "read": true, 00:17:29.551 "write": true, 00:17:29.551 "unmap": true, 00:17:29.551 "write_zeroes": true, 00:17:29.551 "flush": true, 00:17:29.551 "reset": true, 00:17:29.551 "compare": false, 00:17:29.551 "compare_and_write": false, 00:17:29.551 "abort": true, 00:17:29.551 "nvme_admin": false, 00:17:29.551 "nvme_io": false 00:17:29.551 }, 00:17:29.551 "memory_domains": [ 00:17:29.551 { 00:17:29.551 "dma_device_id": "system", 00:17:29.551 "dma_device_type": 1 00:17:29.551 }, 00:17:29.551 { 00:17:29.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.551 "dma_device_type": 2 00:17:29.551 } 00:17:29.551 ], 00:17:29.551 "driver_specific": { 00:17:29.551 "passthru": { 00:17:29.551 "name": "pt1", 00:17:29.551 "base_bdev_name": "malloc1" 00:17:29.551 } 00:17:29.551 } 00:17:29.551 }' 00:17:29.551 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:29.551 23:32:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:29.551 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:29.551 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:29.551 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:29.551 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:29.551 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:29.551 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:29.810 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:29.810 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:29.810 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:29.810 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:29.810 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:29.810 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:29.810 23:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:30.070 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:30.070 "name": "pt2", 00:17:30.070 "aliases": [ 00:17:30.070 "ce0b642d-879a-5544-8432-945ac184a2b8" 00:17:30.070 ], 00:17:30.070 "product_name": "passthru", 00:17:30.070 "block_size": 512, 00:17:30.070 "num_blocks": 65536, 00:17:30.070 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:30.070 "assigned_rate_limits": { 00:17:30.070 "rw_ios_per_sec": 0, 00:17:30.070 "rw_mbytes_per_sec": 0, 00:17:30.070 "r_mbytes_per_sec": 0, 00:17:30.070 "w_mbytes_per_sec": 0 00:17:30.070 }, 00:17:30.070 "claimed": true, 00:17:30.070 "claim_type": "exclusive_write", 00:17:30.070 "zoned": false, 00:17:30.070 "supported_io_types": { 00:17:30.070 "read": true, 00:17:30.070 "write": true, 00:17:30.070 "unmap": true, 00:17:30.070 "write_zeroes": true, 00:17:30.070 "flush": true, 00:17:30.070 "reset": true, 00:17:30.070 "compare": false, 00:17:30.070 "compare_and_write": false, 00:17:30.070 "abort": true, 00:17:30.070 "nvme_admin": false, 00:17:30.070 "nvme_io": false 00:17:30.070 }, 00:17:30.070 "memory_domains": [ 00:17:30.070 { 00:17:30.070 "dma_device_id": "system", 00:17:30.070 "dma_device_type": 1 00:17:30.070 }, 00:17:30.070 { 00:17:30.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.070 "dma_device_type": 2 00:17:30.070 } 00:17:30.070 ], 00:17:30.070 "driver_specific": { 00:17:30.070 "passthru": { 00:17:30.070 "name": "pt2", 00:17:30.070 "base_bdev_name": "malloc2" 00:17:30.070 } 00:17:30.070 } 00:17:30.070 }' 00:17:30.070 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:30.070 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:30.070 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:30.070 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:30.331 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:30.331 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ 
null == null ]] 00:17:30.331 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:30.331 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:30.331 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:30.331 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:30.331 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:30.331 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:30.331 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:30.331 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:30.331 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:30.898 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:30.898 "name": "pt3", 00:17:30.898 "aliases": [ 00:17:30.898 "68fec638-9807-5731-8901-f5c5994f9c0e" 00:17:30.898 ], 00:17:30.898 "product_name": "passthru", 00:17:30.898 "block_size": 512, 00:17:30.898 "num_blocks": 65536, 00:17:30.898 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:30.898 "assigned_rate_limits": { 00:17:30.898 "rw_ios_per_sec": 0, 00:17:30.898 "rw_mbytes_per_sec": 0, 00:17:30.898 "r_mbytes_per_sec": 0, 00:17:30.898 "w_mbytes_per_sec": 0 00:17:30.898 }, 00:17:30.898 "claimed": true, 00:17:30.898 "claim_type": "exclusive_write", 00:17:30.898 "zoned": false, 00:17:30.898 "supported_io_types": { 00:17:30.898 "read": true, 00:17:30.898 "write": true, 00:17:30.899 "unmap": true, 00:17:30.899 "write_zeroes": true, 00:17:30.899 "flush": true, 00:17:30.899 "reset": true, 00:17:30.899 "compare": false, 00:17:30.899 "compare_and_write": false, 00:17:30.899 "abort": true, 00:17:30.899 "nvme_admin": false, 00:17:30.899 "nvme_io": false 00:17:30.899 }, 00:17:30.899 "memory_domains": [ 00:17:30.899 { 00:17:30.899 "dma_device_id": "system", 00:17:30.899 "dma_device_type": 1 00:17:30.899 }, 00:17:30.899 { 00:17:30.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.899 "dma_device_type": 2 00:17:30.899 } 00:17:30.899 ], 00:17:30.899 "driver_specific": { 00:17:30.899 "passthru": { 00:17:30.899 "name": "pt3", 00:17:30.899 "base_bdev_name": "malloc3" 00:17:30.899 } 00:17:30.899 } 00:17:30.899 }' 00:17:30.899 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:30.899 23:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:30.899 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:30.899 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:30.899 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:31.157 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.157 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:31.157 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:31.157 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.157 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:31.157 23:32:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:31.157 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:31.157 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:31.157 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:31.725 [2024-05-14 23:32:54.736296] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.725 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0b4dd769-26fc-40ce-b10c-f0638baacb0a 00:17:31.725 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0b4dd769-26fc-40ce-b10c-f0638baacb0a ']' 00:17:31.725 23:32:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:31.984 [2024-05-14 23:32:55.020283] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.984 [2024-05-14 23:32:55.020356] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.984 [2024-05-14 23:32:55.020477] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.984 [2024-05-14 23:32:55.020560] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.984 [2024-05-14 23:32:55.020575] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:17:31.984 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.984 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:31.984 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:31.984 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:31.984 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:31.984 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:32.243 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:32.243 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:32.501 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:32.502 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:32.760 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:32.760 23:32:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:33.019 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:33.019 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:33.019 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:33.019 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:33.019 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:33.019 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.019 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:33.019 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.019 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:33.019 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.020 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:33.020 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:33.020 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:33.278 [2024-05-14 23:32:56.360337] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:33.278 [2024-05-14 23:32:56.362023] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:33.278 [2024-05-14 23:32:56.362084] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:33.278 [2024-05-14 23:32:56.362131] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:33.278 [2024-05-14 23:32:56.362218] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:33.278 [2024-05-14 23:32:56.362254] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:33.278 [2024-05-14 23:32:56.362306] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.278 [2024-05-14 23:32:56.362320] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:17:33.278 request: 00:17:33.278 { 00:17:33.278 "name": "raid_bdev1", 00:17:33.278 "raid_level": "raid1", 00:17:33.278 "base_bdevs": [ 00:17:33.278 "malloc1", 00:17:33.278 "malloc2", 00:17:33.278 "malloc3" 00:17:33.278 ], 00:17:33.278 "superblock": false, 00:17:33.278 "method": "bdev_raid_create", 00:17:33.278 "req_id": 1 00:17:33.278 } 00:17:33.278 Got JSON-RPC error response 00:17:33.278 response: 00:17:33.278 { 00:17:33.278 "code": -17, 00:17:33.278 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:33.278 } 00:17:33.278 23:32:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # es=1 00:17:33.278 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:33.278 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:33.278 23:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:33.278 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.278 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:33.538 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:33.538 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:33.538 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.797 [2024-05-14 23:32:56.900343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.797 [2024-05-14 23:32:56.900438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.797 [2024-05-14 23:32:56.900490] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d680 00:17:33.797 [2024-05-14 23:32:56.900513] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.797 [2024-05-14 23:32:56.902161] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.797 [2024-05-14 23:32:56.902209] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.797 [2024-05-14 23:32:56.902320] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:33.797 [2024-05-14 23:32:56.902386] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.797 pt1 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.797 23:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.055 23:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:34.055 "name": "raid_bdev1", 00:17:34.055 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:34.055 "strip_size_kb": 0, 00:17:34.055 "state": "configuring", 00:17:34.055 "raid_level": "raid1", 00:17:34.055 "superblock": true, 00:17:34.055 "num_base_bdevs": 3, 00:17:34.055 "num_base_bdevs_discovered": 1, 00:17:34.055 "num_base_bdevs_operational": 3, 00:17:34.055 "base_bdevs_list": [ 00:17:34.055 { 00:17:34.055 "name": "pt1", 00:17:34.055 "uuid": "2b3c25dd-ba79-5934-8be2-3d93a91c96d3", 00:17:34.055 "is_configured": true, 00:17:34.055 "data_offset": 2048, 00:17:34.055 "data_size": 63488 00:17:34.055 }, 00:17:34.055 { 00:17:34.055 "name": null, 00:17:34.055 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:34.055 "is_configured": false, 00:17:34.055 "data_offset": 2048, 00:17:34.055 "data_size": 63488 00:17:34.055 }, 00:17:34.055 { 00:17:34.055 "name": null, 00:17:34.055 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:34.055 "is_configured": false, 00:17:34.055 "data_offset": 2048, 00:17:34.055 "data_size": 63488 00:17:34.055 } 00:17:34.055 ] 00:17:34.055 }' 00:17:34.055 23:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:34.055 23:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.622 23:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:34.622 23:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:34.881 [2024-05-14 23:32:58.024513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:34.881 [2024-05-14 23:32:58.024612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.881 [2024-05-14 23:32:58.024672] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ee80 00:17:34.881 [2024-05-14 23:32:58.024696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.881 [2024-05-14 23:32:58.025057] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.881 [2024-05-14 23:32:58.025093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:34.881 [2024-05-14 23:32:58.025447] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:34.881 [2024-05-14 23:32:58.025492] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.881 pt2 00:17:34.881 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:35.140 [2024-05-14 23:32:58.228524] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.140 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.398 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.398 "name": "raid_bdev1", 00:17:35.398 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:35.398 "strip_size_kb": 0, 00:17:35.398 "state": "configuring", 00:17:35.398 "raid_level": "raid1", 00:17:35.398 "superblock": true, 00:17:35.398 "num_base_bdevs": 3, 00:17:35.398 "num_base_bdevs_discovered": 1, 00:17:35.398 "num_base_bdevs_operational": 3, 00:17:35.398 "base_bdevs_list": [ 00:17:35.398 { 00:17:35.398 "name": "pt1", 00:17:35.398 "uuid": "2b3c25dd-ba79-5934-8be2-3d93a91c96d3", 00:17:35.398 "is_configured": true, 00:17:35.398 "data_offset": 2048, 00:17:35.398 "data_size": 63488 00:17:35.398 }, 00:17:35.398 { 00:17:35.398 "name": null, 00:17:35.398 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:35.398 "is_configured": false, 00:17:35.398 "data_offset": 2048, 00:17:35.398 "data_size": 63488 00:17:35.398 }, 00:17:35.399 { 00:17:35.399 "name": null, 00:17:35.399 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:35.399 "is_configured": false, 00:17:35.399 "data_offset": 2048, 00:17:35.399 "data_size": 63488 00:17:35.399 } 00:17:35.399 ] 00:17:35.399 }' 00:17:35.399 23:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.399 23:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.335 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:36.335 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:36.335 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.335 [2024-05-14 23:32:59.573068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.335 [2024-05-14 23:32:59.573393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.335 [2024-05-14 23:32:59.573451] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030680 00:17:36.335 [2024-05-14 23:32:59.573481] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.335 [2024-05-14 23:32:59.573843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.335 [2024-05-14 23:32:59.573880] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.335 [2024-05-14 23:32:59.573982] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:36.335 [2024-05-14 23:32:59.574008] bdev_raid.c:3122:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:17:36.335 pt2 00:17:36.335 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:36.335 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:36.335 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:36.595 [2024-05-14 23:32:59.793115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:36.595 [2024-05-14 23:32:59.793222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.595 [2024-05-14 23:32:59.793269] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031b80 00:17:36.595 [2024-05-14 23:32:59.793298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.595 [2024-05-14 23:32:59.793649] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.595 [2024-05-14 23:32:59.793693] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:36.595 [2024-05-14 23:32:59.793797] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:36.595 [2024-05-14 23:32:59.793824] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:36.595 [2024-05-14 23:32:59.793912] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:17:36.595 [2024-05-14 23:32:59.793926] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:36.595 [2024-05-14 23:32:59.794009] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:36.595 [2024-05-14 23:32:59.794248] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:17:36.595 [2024-05-14 23:32:59.794264] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:17:36.595 [2024-05-14 23:32:59.794360] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.595 pt3 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 
00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.595 23:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.863 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.863 "name": "raid_bdev1", 00:17:36.863 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:36.863 "strip_size_kb": 0, 00:17:36.863 "state": "online", 00:17:36.863 "raid_level": "raid1", 00:17:36.863 "superblock": true, 00:17:36.863 "num_base_bdevs": 3, 00:17:36.863 "num_base_bdevs_discovered": 3, 00:17:36.864 "num_base_bdevs_operational": 3, 00:17:36.864 "base_bdevs_list": [ 00:17:36.864 { 00:17:36.864 "name": "pt1", 00:17:36.864 "uuid": "2b3c25dd-ba79-5934-8be2-3d93a91c96d3", 00:17:36.864 "is_configured": true, 00:17:36.864 "data_offset": 2048, 00:17:36.864 "data_size": 63488 00:17:36.864 }, 00:17:36.864 { 00:17:36.864 "name": "pt2", 00:17:36.864 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:36.864 "is_configured": true, 00:17:36.864 "data_offset": 2048, 00:17:36.864 "data_size": 63488 00:17:36.864 }, 00:17:36.864 { 00:17:36.864 "name": "pt3", 00:17:36.864 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:36.864 "is_configured": true, 00:17:36.864 "data_offset": 2048, 00:17:36.864 "data_size": 63488 00:17:36.864 } 00:17:36.864 ] 00:17:36.864 }' 00:17:36.864 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.864 23:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.523 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:37.523 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:17:37.523 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:17:37.523 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:17:37.523 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:17:37.523 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:17:37.523 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:37.523 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:17:37.782 [2024-05-14 23:33:00.901428] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.782 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:17:37.782 "name": "raid_bdev1", 00:17:37.782 "aliases": [ 00:17:37.782 "0b4dd769-26fc-40ce-b10c-f0638baacb0a" 00:17:37.782 ], 00:17:37.782 "product_name": "Raid Volume", 00:17:37.782 "block_size": 512, 00:17:37.782 "num_blocks": 63488, 00:17:37.782 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:37.782 "assigned_rate_limits": { 00:17:37.782 "rw_ios_per_sec": 0, 00:17:37.782 "rw_mbytes_per_sec": 0, 00:17:37.782 "r_mbytes_per_sec": 0, 00:17:37.782 "w_mbytes_per_sec": 0 00:17:37.782 }, 00:17:37.782 "claimed": false, 00:17:37.782 "zoned": false, 00:17:37.782 "supported_io_types": { 00:17:37.782 "read": true, 00:17:37.782 "write": true, 00:17:37.782 "unmap": false, 00:17:37.782 "write_zeroes": true, 00:17:37.782 
"flush": false, 00:17:37.782 "reset": true, 00:17:37.782 "compare": false, 00:17:37.783 "compare_and_write": false, 00:17:37.783 "abort": false, 00:17:37.783 "nvme_admin": false, 00:17:37.783 "nvme_io": false 00:17:37.783 }, 00:17:37.783 "memory_domains": [ 00:17:37.783 { 00:17:37.783 "dma_device_id": "system", 00:17:37.783 "dma_device_type": 1 00:17:37.783 }, 00:17:37.783 { 00:17:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.783 "dma_device_type": 2 00:17:37.783 }, 00:17:37.783 { 00:17:37.783 "dma_device_id": "system", 00:17:37.783 "dma_device_type": 1 00:17:37.783 }, 00:17:37.783 { 00:17:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.783 "dma_device_type": 2 00:17:37.783 }, 00:17:37.783 { 00:17:37.783 "dma_device_id": "system", 00:17:37.783 "dma_device_type": 1 00:17:37.783 }, 00:17:37.783 { 00:17:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.783 "dma_device_type": 2 00:17:37.783 } 00:17:37.783 ], 00:17:37.783 "driver_specific": { 00:17:37.783 "raid": { 00:17:37.783 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:37.783 "strip_size_kb": 0, 00:17:37.783 "state": "online", 00:17:37.783 "raid_level": "raid1", 00:17:37.783 "superblock": true, 00:17:37.783 "num_base_bdevs": 3, 00:17:37.783 "num_base_bdevs_discovered": 3, 00:17:37.783 "num_base_bdevs_operational": 3, 00:17:37.783 "base_bdevs_list": [ 00:17:37.783 { 00:17:37.783 "name": "pt1", 00:17:37.783 "uuid": "2b3c25dd-ba79-5934-8be2-3d93a91c96d3", 00:17:37.783 "is_configured": true, 00:17:37.783 "data_offset": 2048, 00:17:37.783 "data_size": 63488 00:17:37.783 }, 00:17:37.783 { 00:17:37.783 "name": "pt2", 00:17:37.783 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:37.783 "is_configured": true, 00:17:37.783 "data_offset": 2048, 00:17:37.783 "data_size": 63488 00:17:37.783 }, 00:17:37.783 { 00:17:37.783 "name": "pt3", 00:17:37.783 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:37.783 "is_configured": true, 00:17:37.783 "data_offset": 2048, 00:17:37.783 "data_size": 63488 00:17:37.783 } 00:17:37.783 ] 00:17:37.783 } 00:17:37.783 } 00:17:37.783 }' 00:17:37.783 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:37.783 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:17:37.783 pt2 00:17:37.783 pt3' 00:17:37.783 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:37.783 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:37.783 23:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:38.042 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:38.042 "name": "pt1", 00:17:38.042 "aliases": [ 00:17:38.042 "2b3c25dd-ba79-5934-8be2-3d93a91c96d3" 00:17:38.042 ], 00:17:38.042 "product_name": "passthru", 00:17:38.042 "block_size": 512, 00:17:38.042 "num_blocks": 65536, 00:17:38.042 "uuid": "2b3c25dd-ba79-5934-8be2-3d93a91c96d3", 00:17:38.042 "assigned_rate_limits": { 00:17:38.042 "rw_ios_per_sec": 0, 00:17:38.042 "rw_mbytes_per_sec": 0, 00:17:38.042 "r_mbytes_per_sec": 0, 00:17:38.042 "w_mbytes_per_sec": 0 00:17:38.042 }, 00:17:38.042 "claimed": true, 00:17:38.042 "claim_type": "exclusive_write", 00:17:38.042 "zoned": false, 00:17:38.042 "supported_io_types": { 00:17:38.042 "read": true, 00:17:38.042 "write": 
true, 00:17:38.042 "unmap": true, 00:17:38.042 "write_zeroes": true, 00:17:38.042 "flush": true, 00:17:38.042 "reset": true, 00:17:38.042 "compare": false, 00:17:38.042 "compare_and_write": false, 00:17:38.042 "abort": true, 00:17:38.042 "nvme_admin": false, 00:17:38.042 "nvme_io": false 00:17:38.042 }, 00:17:38.042 "memory_domains": [ 00:17:38.042 { 00:17:38.042 "dma_device_id": "system", 00:17:38.042 "dma_device_type": 1 00:17:38.042 }, 00:17:38.042 { 00:17:38.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.042 "dma_device_type": 2 00:17:38.042 } 00:17:38.042 ], 00:17:38.042 "driver_specific": { 00:17:38.042 "passthru": { 00:17:38.042 "name": "pt1", 00:17:38.042 "base_bdev_name": "malloc1" 00:17:38.042 } 00:17:38.042 } 00:17:38.042 }' 00:17:38.042 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:38.042 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:38.042 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:38.042 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:38.301 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:38.301 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:38.301 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:38.301 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:38.301 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:38.301 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:38.301 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:38.560 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:38.560 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:38.560 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:38.560 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:38.560 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:38.560 "name": "pt2", 00:17:38.560 "aliases": [ 00:17:38.560 "ce0b642d-879a-5544-8432-945ac184a2b8" 00:17:38.560 ], 00:17:38.560 "product_name": "passthru", 00:17:38.560 "block_size": 512, 00:17:38.560 "num_blocks": 65536, 00:17:38.561 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:38.561 "assigned_rate_limits": { 00:17:38.561 "rw_ios_per_sec": 0, 00:17:38.561 "rw_mbytes_per_sec": 0, 00:17:38.561 "r_mbytes_per_sec": 0, 00:17:38.561 "w_mbytes_per_sec": 0 00:17:38.561 }, 00:17:38.561 "claimed": true, 00:17:38.561 "claim_type": "exclusive_write", 00:17:38.561 "zoned": false, 00:17:38.561 "supported_io_types": { 00:17:38.561 "read": true, 00:17:38.561 "write": true, 00:17:38.561 "unmap": true, 00:17:38.561 "write_zeroes": true, 00:17:38.561 "flush": true, 00:17:38.561 "reset": true, 00:17:38.561 "compare": false, 00:17:38.561 "compare_and_write": false, 00:17:38.561 "abort": true, 00:17:38.561 "nvme_admin": false, 00:17:38.561 "nvme_io": false 00:17:38.561 }, 00:17:38.561 "memory_domains": [ 00:17:38.561 { 00:17:38.561 "dma_device_id": "system", 00:17:38.561 "dma_device_type": 1 00:17:38.561 }, 00:17:38.561 
{ 00:17:38.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.561 "dma_device_type": 2 00:17:38.561 } 00:17:38.561 ], 00:17:38.561 "driver_specific": { 00:17:38.561 "passthru": { 00:17:38.561 "name": "pt2", 00:17:38.561 "base_bdev_name": "malloc2" 00:17:38.561 } 00:17:38.561 } 00:17:38.561 }' 00:17:38.561 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:38.820 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:38.820 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:38.820 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:38.820 23:33:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:38.820 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:38.820 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:39.078 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:39.078 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:39.078 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:39.078 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:39.078 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:39.078 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:17:39.078 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:39.078 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:17:39.337 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:17:39.337 "name": "pt3", 00:17:39.337 "aliases": [ 00:17:39.337 "68fec638-9807-5731-8901-f5c5994f9c0e" 00:17:39.337 ], 00:17:39.337 "product_name": "passthru", 00:17:39.337 "block_size": 512, 00:17:39.337 "num_blocks": 65536, 00:17:39.337 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:39.337 "assigned_rate_limits": { 00:17:39.337 "rw_ios_per_sec": 0, 00:17:39.337 "rw_mbytes_per_sec": 0, 00:17:39.337 "r_mbytes_per_sec": 0, 00:17:39.337 "w_mbytes_per_sec": 0 00:17:39.337 }, 00:17:39.337 "claimed": true, 00:17:39.337 "claim_type": "exclusive_write", 00:17:39.337 "zoned": false, 00:17:39.337 "supported_io_types": { 00:17:39.337 "read": true, 00:17:39.337 "write": true, 00:17:39.337 "unmap": true, 00:17:39.337 "write_zeroes": true, 00:17:39.337 "flush": true, 00:17:39.337 "reset": true, 00:17:39.337 "compare": false, 00:17:39.337 "compare_and_write": false, 00:17:39.337 "abort": true, 00:17:39.337 "nvme_admin": false, 00:17:39.337 "nvme_io": false 00:17:39.337 }, 00:17:39.337 "memory_domains": [ 00:17:39.337 { 00:17:39.337 "dma_device_id": "system", 00:17:39.337 "dma_device_type": 1 00:17:39.337 }, 00:17:39.337 { 00:17:39.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.337 "dma_device_type": 2 00:17:39.337 } 00:17:39.337 ], 00:17:39.337 "driver_specific": { 00:17:39.337 "passthru": { 00:17:39.337 "name": "pt3", 00:17:39.337 "base_bdev_name": "malloc3" 00:17:39.337 } 00:17:39.337 } 00:17:39.337 }' 00:17:39.337 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:39.337 23:33:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .block_size 00:17:39.596 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:17:39.596 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:39.596 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:17:39.596 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:39.596 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:39.596 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:17:39.596 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:39.596 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:39.855 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:17:39.855 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:17:39.855 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:39.855 23:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:40.114 [2024-05-14 23:33:03.181694] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.114 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0b4dd769-26fc-40ce-b10c-f0638baacb0a '!=' 0b4dd769-26fc-40ce-b10c-f0638baacb0a ']' 00:17:40.114 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:40.114 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:17:40.114 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:17:40.114 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:40.114 [2024-05-14 23:33:03.397612] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.372 "name": "raid_bdev1", 00:17:40.372 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:40.372 "strip_size_kb": 0, 00:17:40.372 "state": "online", 00:17:40.372 "raid_level": "raid1", 00:17:40.372 "superblock": true, 00:17:40.372 "num_base_bdevs": 3, 00:17:40.372 "num_base_bdevs_discovered": 2, 00:17:40.372 "num_base_bdevs_operational": 2, 00:17:40.372 "base_bdevs_list": [ 00:17:40.372 { 00:17:40.372 "name": null, 00:17:40.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.372 "is_configured": false, 00:17:40.372 "data_offset": 2048, 00:17:40.372 "data_size": 63488 00:17:40.372 }, 00:17:40.372 { 00:17:40.372 "name": "pt2", 00:17:40.372 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:40.372 "is_configured": true, 00:17:40.372 "data_offset": 2048, 00:17:40.372 "data_size": 63488 00:17:40.372 }, 00:17:40.372 { 00:17:40.372 "name": "pt3", 00:17:40.372 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:40.372 "is_configured": true, 00:17:40.372 "data_offset": 2048, 00:17:40.372 "data_size": 63488 00:17:40.372 } 00:17:40.372 ] 00:17:40.372 }' 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.372 23:33:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.307 23:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:41.307 [2024-05-14 23:33:04.542439] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.307 [2024-05-14 23:33:04.542486] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.307 [2024-05-14 23:33:04.542556] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.307 [2024-05-14 23:33:04.542604] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.307 [2024-05-14 23:33:04.542616] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:17:41.307 23:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.307 23:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:41.566 23:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:41.566 23:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:41.566 23:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:41.566 23:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:41.566 23:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:41.824 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:41.824 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:41.824 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:42.105 23:33:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:42.105 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:42.105 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:42.105 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:42.105 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.379 [2024-05-14 23:33:05.542593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.379 [2024-05-14 23:33:05.542729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.379 [2024-05-14 23:33:05.542785] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033080 00:17:42.379 [2024-05-14 23:33:05.542820] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.379 [2024-05-14 23:33:05.544907] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.379 [2024-05-14 23:33:05.544970] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:42.379 [2024-05-14 23:33:05.545091] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:42.379 [2024-05-14 23:33:05.545173] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.379 pt2 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.379 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.638 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.638 "name": "raid_bdev1", 00:17:42.638 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:42.638 "strip_size_kb": 0, 00:17:42.638 "state": "configuring", 00:17:42.638 "raid_level": "raid1", 00:17:42.638 "superblock": true, 00:17:42.638 "num_base_bdevs": 3, 00:17:42.638 "num_base_bdevs_discovered": 1, 00:17:42.638 "num_base_bdevs_operational": 2, 00:17:42.638 "base_bdevs_list": [ 00:17:42.638 { 00:17:42.638 "name": null, 00:17:42.638 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:42.638 "is_configured": false, 00:17:42.638 "data_offset": 2048, 00:17:42.638 "data_size": 63488 00:17:42.638 }, 00:17:42.638 { 00:17:42.638 "name": "pt2", 00:17:42.638 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:42.638 "is_configured": true, 00:17:42.638 "data_offset": 2048, 00:17:42.638 "data_size": 63488 00:17:42.638 }, 00:17:42.638 { 00:17:42.638 "name": null, 00:17:42.638 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:42.638 "is_configured": false, 00:17:42.638 "data_offset": 2048, 00:17:42.638 "data_size": 63488 00:17:42.638 } 00:17:42.638 ] 00:17:42.638 }' 00:17:42.638 23:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.638 23:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.574 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:43.574 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:43.574 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:17:43.574 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:43.832 [2024-05-14 23:33:06.870724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:43.832 [2024-05-14 23:33:06.870840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.832 [2024-05-14 23:33:06.870896] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034880 00:17:43.832 [2024-05-14 23:33:06.870923] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.832 [2024-05-14 23:33:06.871606] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.832 [2024-05-14 23:33:06.871644] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:43.833 [2024-05-14 23:33:06.871758] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:43.833 [2024-05-14 23:33:06.871796] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:43.833 [2024-05-14 23:33:06.871884] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:17:43.833 [2024-05-14 23:33:06.871898] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:43.833 [2024-05-14 23:33:06.871999] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:43.833 [2024-05-14 23:33:06.872272] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:17:43.833 [2024-05-14 23:33:06.872289] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:17:43.833 [2024-05-14 23:33:06.872389] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.833 pt3 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.833 23:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.091 23:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.091 "name": "raid_bdev1", 00:17:44.091 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:44.091 "strip_size_kb": 0, 00:17:44.091 "state": "online", 00:17:44.091 "raid_level": "raid1", 00:17:44.091 "superblock": true, 00:17:44.091 "num_base_bdevs": 3, 00:17:44.091 "num_base_bdevs_discovered": 2, 00:17:44.091 "num_base_bdevs_operational": 2, 00:17:44.091 "base_bdevs_list": [ 00:17:44.091 { 00:17:44.091 "name": null, 00:17:44.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.091 "is_configured": false, 00:17:44.091 "data_offset": 2048, 00:17:44.091 "data_size": 63488 00:17:44.091 }, 00:17:44.091 { 00:17:44.091 "name": "pt2", 00:17:44.091 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:44.091 "is_configured": true, 00:17:44.091 "data_offset": 2048, 00:17:44.091 "data_size": 63488 00:17:44.091 }, 00:17:44.091 { 00:17:44.091 "name": "pt3", 00:17:44.091 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:44.091 "is_configured": true, 00:17:44.091 "data_offset": 2048, 00:17:44.091 "data_size": 63488 00:17:44.091 } 00:17:44.091 ] 00:17:44.091 }' 00:17:44.091 23:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.091 23:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.660 23:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 3 -gt 2 ']' 00:17:44.660 23:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:44.918 [2024-05-14 23:33:08.194917] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.918 [2024-05-14 23:33:08.194966] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.918 [2024-05-14 23:33:08.195041] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.918 [2024-05-14 23:33:08.195089] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.918 [2024-05-14 23:33:08.195100] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:17:45.177 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.177 23:33:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # jq -r '.[]' 00:17:45.435 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # raid_bdev= 00:17:45.435 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@529 -- # '[' -n '' ']' 00:17:45.435 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:45.693 [2024-05-14 23:33:08.806946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:45.693 [2024-05-14 23:33:08.807062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.693 [2024-05-14 23:33:08.807124] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035d80 00:17:45.693 [2024-05-14 23:33:08.807424] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.693 pt1 00:17:45.694 [2024-05-14 23:33:08.809205] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.694 [2024-05-14 23:33:08.809245] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:45.694 [2024-05-14 23:33:08.809362] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:45.694 [2024-05-14 23:33:08.809416] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.694 23:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.952 23:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.952 "name": "raid_bdev1", 00:17:45.952 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:45.952 "strip_size_kb": 0, 00:17:45.952 "state": "configuring", 00:17:45.952 "raid_level": "raid1", 00:17:45.952 "superblock": true, 00:17:45.952 "num_base_bdevs": 3, 00:17:45.952 "num_base_bdevs_discovered": 1, 00:17:45.952 "num_base_bdevs_operational": 3, 00:17:45.952 "base_bdevs_list": [ 00:17:45.952 { 00:17:45.952 "name": "pt1", 00:17:45.952 "uuid": "2b3c25dd-ba79-5934-8be2-3d93a91c96d3", 00:17:45.952 "is_configured": true, 00:17:45.952 
"data_offset": 2048, 00:17:45.952 "data_size": 63488 00:17:45.952 }, 00:17:45.952 { 00:17:45.952 "name": null, 00:17:45.952 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:45.952 "is_configured": false, 00:17:45.952 "data_offset": 2048, 00:17:45.952 "data_size": 63488 00:17:45.952 }, 00:17:45.952 { 00:17:45.952 "name": null, 00:17:45.952 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:45.952 "is_configured": false, 00:17:45.952 "data_offset": 2048, 00:17:45.952 "data_size": 63488 00:17:45.952 } 00:17:45.952 ] 00:17:45.952 }' 00:17:45.952 23:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.952 23:33:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.891 23:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i = 1 )) 00:17:46.891 23:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:17:46.891 23:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:46.891 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:17:46.891 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:17:46.891 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:47.150 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:17:47.150 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:17:47.150 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # i=2 00:17:47.150 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:47.409 [2024-05-14 23:33:10.547312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:47.409 [2024-05-14 23:33:10.547473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.409 [2024-05-14 23:33:10.547558] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000037580 00:17:47.409 [2024-05-14 23:33:10.547618] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.409 [2024-05-14 23:33:10.548548] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.409 [2024-05-14 23:33:10.548616] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:47.409 [2024-05-14 23:33:10.548777] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:47.409 [2024-05-14 23:33:10.548806] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:47.409 [2024-05-14 23:33:10.548822] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:47.409 [2024-05-14 23:33:10.548852] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state configuring 00:17:47.409 [2024-05-14 23:33:10.548963] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:47.409 pt3 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@551 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.409 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.668 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.668 "name": "raid_bdev1", 00:17:47.668 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:47.668 "strip_size_kb": 0, 00:17:47.668 "state": "configuring", 00:17:47.668 "raid_level": "raid1", 00:17:47.668 "superblock": true, 00:17:47.668 "num_base_bdevs": 3, 00:17:47.668 "num_base_bdevs_discovered": 1, 00:17:47.668 "num_base_bdevs_operational": 2, 00:17:47.668 "base_bdevs_list": [ 00:17:47.668 { 00:17:47.668 "name": null, 00:17:47.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.668 "is_configured": false, 00:17:47.668 "data_offset": 2048, 00:17:47.668 "data_size": 63488 00:17:47.668 }, 00:17:47.668 { 00:17:47.668 "name": null, 00:17:47.668 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:47.668 "is_configured": false, 00:17:47.668 "data_offset": 2048, 00:17:47.668 "data_size": 63488 00:17:47.668 }, 00:17:47.668 { 00:17:47.668 "name": "pt3", 00:17:47.668 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:47.668 "is_configured": true, 00:17:47.668 "data_offset": 2048, 00:17:47.668 "data_size": 63488 00:17:47.668 } 00:17:47.668 ] 00:17:47.668 }' 00:17:47.668 23:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.668 23:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.236 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i = 1 )) 00:17:48.236 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:17:48.236 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:48.494 [2024-05-14 23:33:11.667353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:48.494 [2024-05-14 23:33:11.667462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.494 [2024-05-14 23:33:11.667512] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038a80 00:17:48.494 [2024-05-14 
23:33:11.667549] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.494 [2024-05-14 23:33:11.667918] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.494 [2024-05-14 23:33:11.667957] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:48.494 [2024-05-14 23:33:11.668049] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:48.494 [2024-05-14 23:33:11.668084] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.494 [2024-05-14 23:33:11.668474] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012300 00:17:48.494 [2024-05-14 23:33:11.668496] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:48.494 [2024-05-14 23:33:11.668598] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:48.494 [2024-05-14 23:33:11.668823] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012300 00:17:48.494 [2024-05-14 23:33:11.668839] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012300 00:17:48.494 [2024-05-14 23:33:11.668936] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.494 pt2 00:17:48.494 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i++ )) 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@559 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.495 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.755 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.755 "name": "raid_bdev1", 00:17:48.755 "uuid": "0b4dd769-26fc-40ce-b10c-f0638baacb0a", 00:17:48.755 "strip_size_kb": 0, 00:17:48.755 "state": "online", 00:17:48.755 "raid_level": "raid1", 00:17:48.755 "superblock": true, 00:17:48.755 "num_base_bdevs": 3, 00:17:48.755 "num_base_bdevs_discovered": 2, 00:17:48.755 "num_base_bdevs_operational": 2, 00:17:48.755 "base_bdevs_list": [ 00:17:48.755 { 00:17:48.755 "name": null, 00:17:48.755 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:48.755 "is_configured": false, 00:17:48.755 "data_offset": 2048, 00:17:48.755 "data_size": 63488 00:17:48.755 }, 00:17:48.755 { 00:17:48.755 "name": "pt2", 00:17:48.755 "uuid": "ce0b642d-879a-5544-8432-945ac184a2b8", 00:17:48.755 "is_configured": true, 00:17:48.755 "data_offset": 2048, 00:17:48.755 "data_size": 63488 00:17:48.755 }, 00:17:48.755 { 00:17:48.755 "name": "pt3", 00:17:48.755 "uuid": "68fec638-9807-5731-8901-f5c5994f9c0e", 00:17:48.755 "is_configured": true, 00:17:48.755 "data_offset": 2048, 00:17:48.755 "data_size": 63488 00:17:48.755 } 00:17:48.755 ] 00:17:48.755 }' 00:17:48.755 23:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.755 23:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:17:49.714 [2024-05-14 23:33:12.863672] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' 0b4dd769-26fc-40ce-b10c-f0638baacb0a '!=' 0b4dd769-26fc-40ce-b10c-f0638baacb0a ']' 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 63391 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 63391 ']' 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 63391 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63391 00:17:49.714 killing process with pid 63391 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63391' 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 63391 00:17:49.714 23:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 63391 00:17:49.714 [2024-05-14 23:33:12.908584] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.714 [2024-05-14 23:33:12.908667] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.714 [2024-05-14 23:33:12.908712] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.714 [2024-05-14 23:33:12.908723] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012300 name raid_bdev1, state offline 00:17:49.972 [2024-05-14 23:33:13.161738] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.347 23:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:17:51.347 00:17:51.347 real 0m25.859s 00:17:51.347 user 0m48.657s 00:17:51.347 sys 0m2.444s 00:17:51.347 23:33:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:17:51.347 23:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.347 ************************************ 00:17:51.347 END TEST raid_superblock_test 00:17:51.347 ************************************ 00:17:51.347 23:33:14 bdev_raid -- bdev/bdev_raid.sh@813 -- # for n in {2..4} 00:17:51.347 23:33:14 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:17:51.347 23:33:14 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:51.347 23:33:14 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:51.347 23:33:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:51.347 23:33:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.347 ************************************ 00:17:51.347 START TEST raid_state_function_test 00:17:51.347 ************************************ 00:17:51.347 23:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 false 00:17:51.347 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:17:51.347 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:17:51.347 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:17:51.347 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:17:51.347 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:51.347 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:17:51.347 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:51.347 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:17:51.347 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:17:51.348 23:33:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:17:51.348 Process raid pid: 64203 00:17:51.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=64203 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 64203' 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 64203 /var/tmp/spdk-raid.sock 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 64203 ']' 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:51.348 23:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.607 [2024-05-14 23:33:14.646907] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
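(For orientation only: a condensed, hand-assembled sketch of the RPC sequence this raid_state_function_test run drives against the bdev_svc app launched above. It is not the test script itself; every command below is taken verbatim from this log, and RPC is merely a local shorthand introduced here for readability.)
    # Assumes bdev_svc is already up and listening on /var/tmp/spdk-raid.sock (see the waitforlisten step above).
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Ask for a raid0 volume before any base bdev exists: the raid bdev is created
    # in the "configuring" state and waits for its four base bdevs to appear.
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # Register base bdevs one at a time and let the examine path claim them.
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    $RPC bdev_wait_for_examine
    # Inspect the raid state ("configuring" until all four bases are discovered, "online" after),
    # the same query verify_raid_bdev_state issues through jq.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'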
00:17:51.607 [2024-05-14 23:33:14.647104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.607 [2024-05-14 23:33:14.808386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.866 [2024-05-14 23:33:15.036749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.124 [2024-05-14 23:33:15.241089] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.383 23:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:52.383 23:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:17:52.383 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:52.642 [2024-05-14 23:33:15.783901] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.642 [2024-05-14 23:33:15.784053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:52.642 [2024-05-14 23:33:15.784083] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.642 [2024-05-14 23:33:15.784109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.642 [2024-05-14 23:33:15.784120] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:52.642 [2024-05-14 23:33:15.784481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:52.642 [2024-05-14 23:33:15.784506] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:52.642 [2024-05-14 23:33:15.784547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.642 23:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:52.901 23:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.901 "name": "Existed_Raid", 00:17:52.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.901 "strip_size_kb": 64, 00:17:52.901 "state": "configuring", 00:17:52.901 "raid_level": "raid0", 00:17:52.901 "superblock": false, 00:17:52.901 "num_base_bdevs": 4, 00:17:52.901 "num_base_bdevs_discovered": 0, 00:17:52.901 "num_base_bdevs_operational": 4, 00:17:52.901 "base_bdevs_list": [ 00:17:52.901 { 00:17:52.901 "name": "BaseBdev1", 00:17:52.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.901 "is_configured": false, 00:17:52.901 "data_offset": 0, 00:17:52.901 "data_size": 0 00:17:52.901 }, 00:17:52.901 { 00:17:52.901 "name": "BaseBdev2", 00:17:52.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.901 "is_configured": false, 00:17:52.901 "data_offset": 0, 00:17:52.901 "data_size": 0 00:17:52.901 }, 00:17:52.901 { 00:17:52.901 "name": "BaseBdev3", 00:17:52.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.901 "is_configured": false, 00:17:52.901 "data_offset": 0, 00:17:52.901 "data_size": 0 00:17:52.901 }, 00:17:52.901 { 00:17:52.901 "name": "BaseBdev4", 00:17:52.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.901 "is_configured": false, 00:17:52.901 "data_offset": 0, 00:17:52.901 "data_size": 0 00:17:52.901 } 00:17:52.901 ] 00:17:52.901 }' 00:17:52.901 23:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.901 23:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.498 23:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:53.762 [2024-05-14 23:33:16.951839] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.762 [2024-05-14 23:33:16.951891] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:17:53.762 23:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:54.021 [2024-05-14 23:33:17.151872] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.021 [2024-05-14 23:33:17.151961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.021 [2024-05-14 23:33:17.151977] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.021 [2024-05-14 23:33:17.152005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.021 [2024-05-14 23:33:17.152016] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:54.021 [2024-05-14 23:33:17.152034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:54.021 [2024-05-14 23:33:17.152043] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:54.021 [2024-05-14 23:33:17.152070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:54.021 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:54.279 [2024-05-14 23:33:17.396514] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.279 BaseBdev1 00:17:54.279 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:17:54.279 23:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:54.279 23:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:54.279 23:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:54.279 23:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:54.279 23:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:54.279 23:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.537 23:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:54.796 [ 00:17:54.796 { 00:17:54.796 "name": "BaseBdev1", 00:17:54.796 "aliases": [ 00:17:54.796 "abdf77da-643b-48e0-b47a-94ec496cae9e" 00:17:54.796 ], 00:17:54.796 "product_name": "Malloc disk", 00:17:54.796 "block_size": 512, 00:17:54.796 "num_blocks": 65536, 00:17:54.796 "uuid": "abdf77da-643b-48e0-b47a-94ec496cae9e", 00:17:54.796 "assigned_rate_limits": { 00:17:54.796 "rw_ios_per_sec": 0, 00:17:54.796 "rw_mbytes_per_sec": 0, 00:17:54.796 "r_mbytes_per_sec": 0, 00:17:54.796 "w_mbytes_per_sec": 0 00:17:54.796 }, 00:17:54.796 "claimed": true, 00:17:54.796 "claim_type": "exclusive_write", 00:17:54.796 "zoned": false, 00:17:54.796 "supported_io_types": { 00:17:54.796 "read": true, 00:17:54.796 "write": true, 00:17:54.796 "unmap": true, 00:17:54.796 "write_zeroes": true, 00:17:54.796 "flush": true, 00:17:54.796 "reset": true, 00:17:54.796 "compare": false, 00:17:54.796 "compare_and_write": false, 00:17:54.796 "abort": true, 00:17:54.796 "nvme_admin": false, 00:17:54.796 "nvme_io": false 00:17:54.796 }, 00:17:54.796 "memory_domains": [ 00:17:54.796 { 00:17:54.796 "dma_device_id": "system", 00:17:54.796 "dma_device_type": 1 00:17:54.796 }, 00:17:54.796 { 00:17:54.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.796 "dma_device_type": 2 00:17:54.796 } 00:17:54.796 ], 00:17:54.796 "driver_specific": {} 00:17:54.796 } 00:17:54.796 ] 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.796 23:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.055 23:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.055 "name": "Existed_Raid", 00:17:55.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.055 "strip_size_kb": 64, 00:17:55.055 "state": "configuring", 00:17:55.055 "raid_level": "raid0", 00:17:55.055 "superblock": false, 00:17:55.055 "num_base_bdevs": 4, 00:17:55.055 "num_base_bdevs_discovered": 1, 00:17:55.055 "num_base_bdevs_operational": 4, 00:17:55.055 "base_bdevs_list": [ 00:17:55.055 { 00:17:55.055 "name": "BaseBdev1", 00:17:55.055 "uuid": "abdf77da-643b-48e0-b47a-94ec496cae9e", 00:17:55.055 "is_configured": true, 00:17:55.055 "data_offset": 0, 00:17:55.055 "data_size": 65536 00:17:55.055 }, 00:17:55.055 { 00:17:55.055 "name": "BaseBdev2", 00:17:55.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.055 "is_configured": false, 00:17:55.055 "data_offset": 0, 00:17:55.055 "data_size": 0 00:17:55.055 }, 00:17:55.055 { 00:17:55.055 "name": "BaseBdev3", 00:17:55.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.055 "is_configured": false, 00:17:55.055 "data_offset": 0, 00:17:55.055 "data_size": 0 00:17:55.055 }, 00:17:55.055 { 00:17:55.055 "name": "BaseBdev4", 00:17:55.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.055 "is_configured": false, 00:17:55.055 "data_offset": 0, 00:17:55.055 "data_size": 0 00:17:55.055 } 00:17:55.055 ] 00:17:55.055 }' 00:17:55.055 23:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.055 23:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.621 23:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:55.879 [2024-05-14 23:33:18.928773] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.879 [2024-05-14 23:33:18.928863] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:17:55.879 23:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:55.879 [2024-05-14 23:33:19.144843] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.879 [2024-05-14 23:33:19.146635] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.879 [2024-05-14 23:33:19.146746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.879 [2024-05-14 23:33:19.146775] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:55.879 [2024-05-14 23:33:19.146810] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:55.879 [2024-05-14 23:33:19.146821] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:55.879 [2024-05-14 23:33:19.146839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.879 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.137 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.137 "name": "Existed_Raid", 00:17:56.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.137 "strip_size_kb": 64, 00:17:56.137 "state": "configuring", 00:17:56.137 "raid_level": "raid0", 00:17:56.137 "superblock": false, 00:17:56.137 "num_base_bdevs": 4, 00:17:56.137 "num_base_bdevs_discovered": 1, 00:17:56.137 "num_base_bdevs_operational": 4, 00:17:56.137 "base_bdevs_list": [ 00:17:56.137 { 00:17:56.137 "name": "BaseBdev1", 00:17:56.137 "uuid": "abdf77da-643b-48e0-b47a-94ec496cae9e", 00:17:56.137 "is_configured": true, 00:17:56.137 "data_offset": 0, 00:17:56.137 "data_size": 65536 00:17:56.137 }, 00:17:56.137 { 00:17:56.137 "name": "BaseBdev2", 00:17:56.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.137 "is_configured": false, 00:17:56.137 "data_offset": 0, 00:17:56.137 "data_size": 0 00:17:56.137 }, 00:17:56.137 { 00:17:56.137 "name": "BaseBdev3", 00:17:56.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.137 "is_configured": false, 00:17:56.137 "data_offset": 0, 00:17:56.137 "data_size": 0 00:17:56.137 }, 00:17:56.137 { 00:17:56.137 "name": "BaseBdev4", 00:17:56.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.137 "is_configured": false, 00:17:56.137 "data_offset": 0, 00:17:56.137 "data_size": 0 00:17:56.137 } 00:17:56.138 ] 00:17:56.138 }' 00:17:56.138 23:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:17:56.138 23:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.072 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:57.072 [2024-05-14 23:33:20.322610] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.072 BaseBdev2 00:17:57.072 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:17:57.072 23:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:57.072 23:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:57.072 23:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:57.072 23:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:57.072 23:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:57.072 23:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:57.330 23:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:57.588 [ 00:17:57.588 { 00:17:57.588 "name": "BaseBdev2", 00:17:57.588 "aliases": [ 00:17:57.588 "f847e7c5-34ce-4819-a54b-6d218cc3d7e6" 00:17:57.588 ], 00:17:57.588 "product_name": "Malloc disk", 00:17:57.588 "block_size": 512, 00:17:57.588 "num_blocks": 65536, 00:17:57.588 "uuid": "f847e7c5-34ce-4819-a54b-6d218cc3d7e6", 00:17:57.588 "assigned_rate_limits": { 00:17:57.588 "rw_ios_per_sec": 0, 00:17:57.588 "rw_mbytes_per_sec": 0, 00:17:57.588 "r_mbytes_per_sec": 0, 00:17:57.588 "w_mbytes_per_sec": 0 00:17:57.588 }, 00:17:57.588 "claimed": true, 00:17:57.588 "claim_type": "exclusive_write", 00:17:57.588 "zoned": false, 00:17:57.588 "supported_io_types": { 00:17:57.588 "read": true, 00:17:57.588 "write": true, 00:17:57.588 "unmap": true, 00:17:57.588 "write_zeroes": true, 00:17:57.588 "flush": true, 00:17:57.588 "reset": true, 00:17:57.588 "compare": false, 00:17:57.588 "compare_and_write": false, 00:17:57.588 "abort": true, 00:17:57.588 "nvme_admin": false, 00:17:57.588 "nvme_io": false 00:17:57.588 }, 00:17:57.588 "memory_domains": [ 00:17:57.588 { 00:17:57.588 "dma_device_id": "system", 00:17:57.588 "dma_device_type": 1 00:17:57.588 }, 00:17:57.588 { 00:17:57.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.588 "dma_device_type": 2 00:17:57.588 } 00:17:57.588 ], 00:17:57.588 "driver_specific": {} 00:17:57.588 } 00:17:57.588 ] 00:17:57.588 23:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:57.588 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:57.588 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:57.588 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:57.588 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:57.588 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:17:57.588 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:57.589 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:57.589 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:57.589 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.589 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.589 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.589 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.589 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.589 23:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.848 23:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.848 "name": "Existed_Raid", 00:17:57.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.848 "strip_size_kb": 64, 00:17:57.848 "state": "configuring", 00:17:57.848 "raid_level": "raid0", 00:17:57.848 "superblock": false, 00:17:57.848 "num_base_bdevs": 4, 00:17:57.848 "num_base_bdevs_discovered": 2, 00:17:57.848 "num_base_bdevs_operational": 4, 00:17:57.848 "base_bdevs_list": [ 00:17:57.848 { 00:17:57.848 "name": "BaseBdev1", 00:17:57.848 "uuid": "abdf77da-643b-48e0-b47a-94ec496cae9e", 00:17:57.848 "is_configured": true, 00:17:57.848 "data_offset": 0, 00:17:57.848 "data_size": 65536 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "name": "BaseBdev2", 00:17:57.848 "uuid": "f847e7c5-34ce-4819-a54b-6d218cc3d7e6", 00:17:57.848 "is_configured": true, 00:17:57.848 "data_offset": 0, 00:17:57.848 "data_size": 65536 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "name": "BaseBdev3", 00:17:57.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.848 "is_configured": false, 00:17:57.848 "data_offset": 0, 00:17:57.848 "data_size": 0 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "name": "BaseBdev4", 00:17:57.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.848 "is_configured": false, 00:17:57.848 "data_offset": 0, 00:17:57.848 "data_size": 0 00:17:57.848 } 00:17:57.848 ] 00:17:57.848 }' 00:17:57.848 23:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.848 23:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.784 23:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:58.784 [2024-05-14 23:33:22.053119] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.784 BaseBdev3 00:17:58.784 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:17:58.784 23:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:58.784 23:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:58.785 23:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:58.785 
23:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:58.785 23:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:58.785 23:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.043 23:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:59.301 [ 00:17:59.301 { 00:17:59.301 "name": "BaseBdev3", 00:17:59.302 "aliases": [ 00:17:59.302 "f4496e3c-4a63-4dd1-967e-681bda4107d4" 00:17:59.302 ], 00:17:59.302 "product_name": "Malloc disk", 00:17:59.302 "block_size": 512, 00:17:59.302 "num_blocks": 65536, 00:17:59.302 "uuid": "f4496e3c-4a63-4dd1-967e-681bda4107d4", 00:17:59.302 "assigned_rate_limits": { 00:17:59.302 "rw_ios_per_sec": 0, 00:17:59.302 "rw_mbytes_per_sec": 0, 00:17:59.302 "r_mbytes_per_sec": 0, 00:17:59.302 "w_mbytes_per_sec": 0 00:17:59.302 }, 00:17:59.302 "claimed": true, 00:17:59.302 "claim_type": "exclusive_write", 00:17:59.302 "zoned": false, 00:17:59.302 "supported_io_types": { 00:17:59.302 "read": true, 00:17:59.302 "write": true, 00:17:59.302 "unmap": true, 00:17:59.302 "write_zeroes": true, 00:17:59.302 "flush": true, 00:17:59.302 "reset": true, 00:17:59.302 "compare": false, 00:17:59.302 "compare_and_write": false, 00:17:59.302 "abort": true, 00:17:59.302 "nvme_admin": false, 00:17:59.302 "nvme_io": false 00:17:59.302 }, 00:17:59.302 "memory_domains": [ 00:17:59.302 { 00:17:59.302 "dma_device_id": "system", 00:17:59.302 "dma_device_type": 1 00:17:59.302 }, 00:17:59.302 { 00:17:59.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.302 "dma_device_type": 2 00:17:59.302 } 00:17:59.302 ], 00:17:59.302 "driver_specific": {} 00:17:59.302 } 00:17:59.302 ] 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.302 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.561 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.561 "name": "Existed_Raid", 00:17:59.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.562 "strip_size_kb": 64, 00:17:59.562 "state": "configuring", 00:17:59.562 "raid_level": "raid0", 00:17:59.562 "superblock": false, 00:17:59.562 "num_base_bdevs": 4, 00:17:59.562 "num_base_bdevs_discovered": 3, 00:17:59.562 "num_base_bdevs_operational": 4, 00:17:59.562 "base_bdevs_list": [ 00:17:59.562 { 00:17:59.562 "name": "BaseBdev1", 00:17:59.562 "uuid": "abdf77da-643b-48e0-b47a-94ec496cae9e", 00:17:59.562 "is_configured": true, 00:17:59.562 "data_offset": 0, 00:17:59.562 "data_size": 65536 00:17:59.562 }, 00:17:59.562 { 00:17:59.562 "name": "BaseBdev2", 00:17:59.562 "uuid": "f847e7c5-34ce-4819-a54b-6d218cc3d7e6", 00:17:59.562 "is_configured": true, 00:17:59.562 "data_offset": 0, 00:17:59.562 "data_size": 65536 00:17:59.562 }, 00:17:59.562 { 00:17:59.562 "name": "BaseBdev3", 00:17:59.562 "uuid": "f4496e3c-4a63-4dd1-967e-681bda4107d4", 00:17:59.562 "is_configured": true, 00:17:59.562 "data_offset": 0, 00:17:59.562 "data_size": 65536 00:17:59.562 }, 00:17:59.562 { 00:17:59.562 "name": "BaseBdev4", 00:17:59.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.562 "is_configured": false, 00:17:59.562 "data_offset": 0, 00:17:59.562 "data_size": 0 00:17:59.562 } 00:17:59.562 ] 00:17:59.562 }' 00:17:59.562 23:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.562 23:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.496 23:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:00.497 [2024-05-14 23:33:23.687199] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:00.497 [2024-05-14 23:33:23.687270] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:18:00.497 [2024-05-14 23:33:23.687280] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:00.497 [2024-05-14 23:33:23.687415] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:00.497 [2024-05-14 23:33:23.687641] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:18:00.497 [2024-05-14 23:33:23.687657] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:18:00.497 [2024-05-14 23:33:23.687863] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.497 BaseBdev4 00:18:00.497 23:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:18:00.497 23:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:18:00.497 23:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:00.497 23:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:00.497 23:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:00.497 
23:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:00.497 23:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:00.755 23:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:01.014 [ 00:18:01.014 { 00:18:01.014 "name": "BaseBdev4", 00:18:01.014 "aliases": [ 00:18:01.014 "00be2468-ec18-473a-93b5-5a7f4f418342" 00:18:01.014 ], 00:18:01.014 "product_name": "Malloc disk", 00:18:01.014 "block_size": 512, 00:18:01.014 "num_blocks": 65536, 00:18:01.014 "uuid": "00be2468-ec18-473a-93b5-5a7f4f418342", 00:18:01.014 "assigned_rate_limits": { 00:18:01.014 "rw_ios_per_sec": 0, 00:18:01.014 "rw_mbytes_per_sec": 0, 00:18:01.014 "r_mbytes_per_sec": 0, 00:18:01.014 "w_mbytes_per_sec": 0 00:18:01.014 }, 00:18:01.014 "claimed": true, 00:18:01.014 "claim_type": "exclusive_write", 00:18:01.014 "zoned": false, 00:18:01.014 "supported_io_types": { 00:18:01.014 "read": true, 00:18:01.014 "write": true, 00:18:01.014 "unmap": true, 00:18:01.014 "write_zeroes": true, 00:18:01.014 "flush": true, 00:18:01.014 "reset": true, 00:18:01.014 "compare": false, 00:18:01.014 "compare_and_write": false, 00:18:01.014 "abort": true, 00:18:01.014 "nvme_admin": false, 00:18:01.014 "nvme_io": false 00:18:01.014 }, 00:18:01.015 "memory_domains": [ 00:18:01.015 { 00:18:01.015 "dma_device_id": "system", 00:18:01.015 "dma_device_type": 1 00:18:01.015 }, 00:18:01.015 { 00:18:01.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.015 "dma_device_type": 2 00:18:01.015 } 00:18:01.015 ], 00:18:01.015 "driver_specific": {} 00:18:01.015 } 00:18:01.015 ] 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.015 23:33:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.274 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.274 "name": "Existed_Raid", 00:18:01.274 "uuid": "52ff1c82-b805-4719-9c7b-72ac17c24b65", 00:18:01.274 "strip_size_kb": 64, 00:18:01.274 "state": "online", 00:18:01.274 "raid_level": "raid0", 00:18:01.274 "superblock": false, 00:18:01.274 "num_base_bdevs": 4, 00:18:01.274 "num_base_bdevs_discovered": 4, 00:18:01.274 "num_base_bdevs_operational": 4, 00:18:01.274 "base_bdevs_list": [ 00:18:01.274 { 00:18:01.274 "name": "BaseBdev1", 00:18:01.274 "uuid": "abdf77da-643b-48e0-b47a-94ec496cae9e", 00:18:01.274 "is_configured": true, 00:18:01.274 "data_offset": 0, 00:18:01.274 "data_size": 65536 00:18:01.274 }, 00:18:01.274 { 00:18:01.274 "name": "BaseBdev2", 00:18:01.274 "uuid": "f847e7c5-34ce-4819-a54b-6d218cc3d7e6", 00:18:01.274 "is_configured": true, 00:18:01.274 "data_offset": 0, 00:18:01.274 "data_size": 65536 00:18:01.274 }, 00:18:01.274 { 00:18:01.274 "name": "BaseBdev3", 00:18:01.274 "uuid": "f4496e3c-4a63-4dd1-967e-681bda4107d4", 00:18:01.274 "is_configured": true, 00:18:01.274 "data_offset": 0, 00:18:01.274 "data_size": 65536 00:18:01.274 }, 00:18:01.274 { 00:18:01.274 "name": "BaseBdev4", 00:18:01.274 "uuid": "00be2468-ec18-473a-93b5-5a7f4f418342", 00:18:01.274 "is_configured": true, 00:18:01.274 "data_offset": 0, 00:18:01.274 "data_size": 65536 00:18:01.274 } 00:18:01.274 ] 00:18:01.274 }' 00:18:01.274 23:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.274 23:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.842 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:18:01.842 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:18:01.842 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:01.842 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:01.842 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:01.842 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:18:01.842 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:01.842 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:02.101 [2024-05-14 23:33:25.359596] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.101 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:02.101 "name": "Existed_Raid", 00:18:02.101 "aliases": [ 00:18:02.101 "52ff1c82-b805-4719-9c7b-72ac17c24b65" 00:18:02.101 ], 00:18:02.101 "product_name": "Raid Volume", 00:18:02.101 "block_size": 512, 00:18:02.101 "num_blocks": 262144, 00:18:02.101 "uuid": "52ff1c82-b805-4719-9c7b-72ac17c24b65", 00:18:02.101 "assigned_rate_limits": { 00:18:02.101 "rw_ios_per_sec": 0, 00:18:02.101 "rw_mbytes_per_sec": 0, 00:18:02.101 "r_mbytes_per_sec": 0, 00:18:02.101 "w_mbytes_per_sec": 0 00:18:02.101 }, 00:18:02.101 "claimed": false, 00:18:02.101 "zoned": false, 00:18:02.101 "supported_io_types": { 00:18:02.101 "read": true, 00:18:02.101 "write": true, 00:18:02.101 
"unmap": true, 00:18:02.101 "write_zeroes": true, 00:18:02.101 "flush": true, 00:18:02.101 "reset": true, 00:18:02.101 "compare": false, 00:18:02.101 "compare_and_write": false, 00:18:02.101 "abort": false, 00:18:02.101 "nvme_admin": false, 00:18:02.101 "nvme_io": false 00:18:02.101 }, 00:18:02.101 "memory_domains": [ 00:18:02.101 { 00:18:02.101 "dma_device_id": "system", 00:18:02.101 "dma_device_type": 1 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.101 "dma_device_type": 2 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "dma_device_id": "system", 00:18:02.101 "dma_device_type": 1 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.101 "dma_device_type": 2 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "dma_device_id": "system", 00:18:02.101 "dma_device_type": 1 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.101 "dma_device_type": 2 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "dma_device_id": "system", 00:18:02.101 "dma_device_type": 1 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.101 "dma_device_type": 2 00:18:02.101 } 00:18:02.101 ], 00:18:02.101 "driver_specific": { 00:18:02.101 "raid": { 00:18:02.101 "uuid": "52ff1c82-b805-4719-9c7b-72ac17c24b65", 00:18:02.101 "strip_size_kb": 64, 00:18:02.101 "state": "online", 00:18:02.101 "raid_level": "raid0", 00:18:02.101 "superblock": false, 00:18:02.101 "num_base_bdevs": 4, 00:18:02.101 "num_base_bdevs_discovered": 4, 00:18:02.101 "num_base_bdevs_operational": 4, 00:18:02.101 "base_bdevs_list": [ 00:18:02.101 { 00:18:02.101 "name": "BaseBdev1", 00:18:02.101 "uuid": "abdf77da-643b-48e0-b47a-94ec496cae9e", 00:18:02.101 "is_configured": true, 00:18:02.101 "data_offset": 0, 00:18:02.101 "data_size": 65536 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "name": "BaseBdev2", 00:18:02.101 "uuid": "f847e7c5-34ce-4819-a54b-6d218cc3d7e6", 00:18:02.101 "is_configured": true, 00:18:02.101 "data_offset": 0, 00:18:02.101 "data_size": 65536 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "name": "BaseBdev3", 00:18:02.101 "uuid": "f4496e3c-4a63-4dd1-967e-681bda4107d4", 00:18:02.101 "is_configured": true, 00:18:02.101 "data_offset": 0, 00:18:02.101 "data_size": 65536 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "name": "BaseBdev4", 00:18:02.101 "uuid": "00be2468-ec18-473a-93b5-5a7f4f418342", 00:18:02.101 "is_configured": true, 00:18:02.101 "data_offset": 0, 00:18:02.101 "data_size": 65536 00:18:02.101 } 00:18:02.101 ] 00:18:02.101 } 00:18:02.101 } 00:18:02.101 }' 00:18:02.101 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.361 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:18:02.361 BaseBdev2 00:18:02.361 BaseBdev3 00:18:02.361 BaseBdev4' 00:18:02.361 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:02.361 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:02.361 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:02.620 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:02.620 "name": "BaseBdev1", 00:18:02.620 "aliases": [ 00:18:02.620 
"abdf77da-643b-48e0-b47a-94ec496cae9e" 00:18:02.620 ], 00:18:02.620 "product_name": "Malloc disk", 00:18:02.620 "block_size": 512, 00:18:02.620 "num_blocks": 65536, 00:18:02.620 "uuid": "abdf77da-643b-48e0-b47a-94ec496cae9e", 00:18:02.620 "assigned_rate_limits": { 00:18:02.620 "rw_ios_per_sec": 0, 00:18:02.620 "rw_mbytes_per_sec": 0, 00:18:02.620 "r_mbytes_per_sec": 0, 00:18:02.620 "w_mbytes_per_sec": 0 00:18:02.620 }, 00:18:02.620 "claimed": true, 00:18:02.620 "claim_type": "exclusive_write", 00:18:02.620 "zoned": false, 00:18:02.620 "supported_io_types": { 00:18:02.620 "read": true, 00:18:02.620 "write": true, 00:18:02.620 "unmap": true, 00:18:02.620 "write_zeroes": true, 00:18:02.620 "flush": true, 00:18:02.620 "reset": true, 00:18:02.620 "compare": false, 00:18:02.620 "compare_and_write": false, 00:18:02.620 "abort": true, 00:18:02.620 "nvme_admin": false, 00:18:02.620 "nvme_io": false 00:18:02.620 }, 00:18:02.620 "memory_domains": [ 00:18:02.620 { 00:18:02.620 "dma_device_id": "system", 00:18:02.620 "dma_device_type": 1 00:18:02.620 }, 00:18:02.620 { 00:18:02.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.620 "dma_device_type": 2 00:18:02.620 } 00:18:02.620 ], 00:18:02.620 "driver_specific": {} 00:18:02.620 }' 00:18:02.620 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:02.620 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:02.620 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:02.620 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:02.620 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:02.879 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:02.879 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:02.879 23:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:02.879 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:02.879 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:02.879 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:02.879 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:02.879 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:02.879 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:02.879 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:03.138 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:03.138 "name": "BaseBdev2", 00:18:03.138 "aliases": [ 00:18:03.138 "f847e7c5-34ce-4819-a54b-6d218cc3d7e6" 00:18:03.138 ], 00:18:03.138 "product_name": "Malloc disk", 00:18:03.138 "block_size": 512, 00:18:03.138 "num_blocks": 65536, 00:18:03.138 "uuid": "f847e7c5-34ce-4819-a54b-6d218cc3d7e6", 00:18:03.138 "assigned_rate_limits": { 00:18:03.138 "rw_ios_per_sec": 0, 00:18:03.138 "rw_mbytes_per_sec": 0, 00:18:03.138 "r_mbytes_per_sec": 0, 00:18:03.138 "w_mbytes_per_sec": 0 00:18:03.138 }, 00:18:03.138 "claimed": true, 00:18:03.138 "claim_type": "exclusive_write", 
00:18:03.138 "zoned": false, 00:18:03.138 "supported_io_types": { 00:18:03.138 "read": true, 00:18:03.138 "write": true, 00:18:03.138 "unmap": true, 00:18:03.138 "write_zeroes": true, 00:18:03.138 "flush": true, 00:18:03.138 "reset": true, 00:18:03.138 "compare": false, 00:18:03.138 "compare_and_write": false, 00:18:03.138 "abort": true, 00:18:03.138 "nvme_admin": false, 00:18:03.138 "nvme_io": false 00:18:03.138 }, 00:18:03.138 "memory_domains": [ 00:18:03.138 { 00:18:03.138 "dma_device_id": "system", 00:18:03.138 "dma_device_type": 1 00:18:03.138 }, 00:18:03.138 { 00:18:03.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.138 "dma_device_type": 2 00:18:03.138 } 00:18:03.138 ], 00:18:03.138 "driver_specific": {} 00:18:03.138 }' 00:18:03.138 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:03.138 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:03.398 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:03.398 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:03.398 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:03.398 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:03.398 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:03.398 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:03.398 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:03.398 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:03.656 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:03.656 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:03.656 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:03.656 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:03.656 23:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:03.915 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:03.915 "name": "BaseBdev3", 00:18:03.915 "aliases": [ 00:18:03.915 "f4496e3c-4a63-4dd1-967e-681bda4107d4" 00:18:03.915 ], 00:18:03.915 "product_name": "Malloc disk", 00:18:03.915 "block_size": 512, 00:18:03.915 "num_blocks": 65536, 00:18:03.915 "uuid": "f4496e3c-4a63-4dd1-967e-681bda4107d4", 00:18:03.915 "assigned_rate_limits": { 00:18:03.915 "rw_ios_per_sec": 0, 00:18:03.915 "rw_mbytes_per_sec": 0, 00:18:03.915 "r_mbytes_per_sec": 0, 00:18:03.915 "w_mbytes_per_sec": 0 00:18:03.915 }, 00:18:03.915 "claimed": true, 00:18:03.915 "claim_type": "exclusive_write", 00:18:03.915 "zoned": false, 00:18:03.915 "supported_io_types": { 00:18:03.915 "read": true, 00:18:03.915 "write": true, 00:18:03.915 "unmap": true, 00:18:03.915 "write_zeroes": true, 00:18:03.915 "flush": true, 00:18:03.915 "reset": true, 00:18:03.915 "compare": false, 00:18:03.915 "compare_and_write": false, 00:18:03.915 "abort": true, 00:18:03.915 "nvme_admin": false, 00:18:03.915 "nvme_io": false 00:18:03.915 }, 00:18:03.915 "memory_domains": [ 00:18:03.915 { 00:18:03.915 "dma_device_id": 
"system", 00:18:03.915 "dma_device_type": 1 00:18:03.915 }, 00:18:03.915 { 00:18:03.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.915 "dma_device_type": 2 00:18:03.915 } 00:18:03.915 ], 00:18:03.915 "driver_specific": {} 00:18:03.915 }' 00:18:03.915 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:03.915 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:04.173 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:04.173 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:04.173 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:04.173 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:04.173 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:04.173 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:04.433 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:04.433 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:04.433 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:04.433 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:04.433 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:04.433 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:04.433 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:04.692 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:04.692 "name": "BaseBdev4", 00:18:04.692 "aliases": [ 00:18:04.692 "00be2468-ec18-473a-93b5-5a7f4f418342" 00:18:04.692 ], 00:18:04.692 "product_name": "Malloc disk", 00:18:04.692 "block_size": 512, 00:18:04.692 "num_blocks": 65536, 00:18:04.692 "uuid": "00be2468-ec18-473a-93b5-5a7f4f418342", 00:18:04.692 "assigned_rate_limits": { 00:18:04.692 "rw_ios_per_sec": 0, 00:18:04.692 "rw_mbytes_per_sec": 0, 00:18:04.692 "r_mbytes_per_sec": 0, 00:18:04.692 "w_mbytes_per_sec": 0 00:18:04.692 }, 00:18:04.692 "claimed": true, 00:18:04.692 "claim_type": "exclusive_write", 00:18:04.692 "zoned": false, 00:18:04.692 "supported_io_types": { 00:18:04.692 "read": true, 00:18:04.692 "write": true, 00:18:04.692 "unmap": true, 00:18:04.692 "write_zeroes": true, 00:18:04.692 "flush": true, 00:18:04.692 "reset": true, 00:18:04.692 "compare": false, 00:18:04.692 "compare_and_write": false, 00:18:04.692 "abort": true, 00:18:04.692 "nvme_admin": false, 00:18:04.692 "nvme_io": false 00:18:04.692 }, 00:18:04.692 "memory_domains": [ 00:18:04.692 { 00:18:04.692 "dma_device_id": "system", 00:18:04.692 "dma_device_type": 1 00:18:04.692 }, 00:18:04.692 { 00:18:04.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.692 "dma_device_type": 2 00:18:04.692 } 00:18:04.692 ], 00:18:04.692 "driver_specific": {} 00:18:04.692 }' 00:18:04.692 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:04.692 23:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:04.951 23:33:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:04.951 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:04.951 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:04.951 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:04.951 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:04.951 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:05.210 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:05.210 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:05.210 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:05.210 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:05.210 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:05.469 [2024-05-14 23:33:28.599877] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:05.469 [2024-05-14 23:33:28.599918] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.469 [2024-05-14 23:33:28.599964] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.469 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.728 
23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.728 "name": "Existed_Raid", 00:18:05.728 "uuid": "52ff1c82-b805-4719-9c7b-72ac17c24b65", 00:18:05.728 "strip_size_kb": 64, 00:18:05.728 "state": "offline", 00:18:05.728 "raid_level": "raid0", 00:18:05.728 "superblock": false, 00:18:05.728 "num_base_bdevs": 4, 00:18:05.728 "num_base_bdevs_discovered": 3, 00:18:05.728 "num_base_bdevs_operational": 3, 00:18:05.728 "base_bdevs_list": [ 00:18:05.728 { 00:18:05.728 "name": null, 00:18:05.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.728 "is_configured": false, 00:18:05.728 "data_offset": 0, 00:18:05.728 "data_size": 65536 00:18:05.728 }, 00:18:05.728 { 00:18:05.728 "name": "BaseBdev2", 00:18:05.728 "uuid": "f847e7c5-34ce-4819-a54b-6d218cc3d7e6", 00:18:05.728 "is_configured": true, 00:18:05.728 "data_offset": 0, 00:18:05.728 "data_size": 65536 00:18:05.728 }, 00:18:05.728 { 00:18:05.728 "name": "BaseBdev3", 00:18:05.728 "uuid": "f4496e3c-4a63-4dd1-967e-681bda4107d4", 00:18:05.728 "is_configured": true, 00:18:05.728 "data_offset": 0, 00:18:05.728 "data_size": 65536 00:18:05.728 }, 00:18:05.728 { 00:18:05.728 "name": "BaseBdev4", 00:18:05.728 "uuid": "00be2468-ec18-473a-93b5-5a7f4f418342", 00:18:05.728 "is_configured": true, 00:18:05.728 "data_offset": 0, 00:18:05.728 "data_size": 65536 00:18:05.728 } 00:18:05.728 ] 00:18:05.728 }' 00:18:05.728 23:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.728 23:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.716 23:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:06.716 23:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:06.716 23:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:06.716 23:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.716 23:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:06.716 23:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:06.716 23:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:06.975 [2024-05-14 23:33:30.036108] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:06.975 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:06.975 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:06.975 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.975 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:07.233 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:07.233 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:07.233 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:18:07.492 [2024-05-14 23:33:30.539722] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:07.492 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:07.492 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:07.492 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.492 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:07.750 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:07.750 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:07.750 23:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:08.009 [2024-05-14 23:33:31.111698] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:08.009 [2024-05-14 23:33:31.111779] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:18:08.009 23:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:08.009 23:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:08.009 23:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.009 23:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:18:08.268 23:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:18:08.268 23:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:18:08.268 23:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:18:08.268 23:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:18:08.268 23:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:08.268 23:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:08.526 BaseBdev2 00:18:08.526 23:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:18:08.526 23:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:08.526 23:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:08.526 23:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:08.526 23:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:08.526 23:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:08.526 23:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.786 23:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:08.786 [ 00:18:08.786 { 00:18:08.786 "name": "BaseBdev2", 00:18:08.786 "aliases": [ 00:18:08.786 "a8a7b072-a14e-49cd-9d42-07f175ae1771" 00:18:08.786 ], 00:18:08.786 "product_name": "Malloc disk", 00:18:08.786 "block_size": 512, 00:18:08.786 "num_blocks": 65536, 00:18:08.786 "uuid": "a8a7b072-a14e-49cd-9d42-07f175ae1771", 00:18:08.786 "assigned_rate_limits": { 00:18:08.786 "rw_ios_per_sec": 0, 00:18:08.786 "rw_mbytes_per_sec": 0, 00:18:08.786 "r_mbytes_per_sec": 0, 00:18:08.786 "w_mbytes_per_sec": 0 00:18:08.786 }, 00:18:08.786 "claimed": false, 00:18:08.786 "zoned": false, 00:18:08.786 "supported_io_types": { 00:18:08.786 "read": true, 00:18:08.786 "write": true, 00:18:08.786 "unmap": true, 00:18:08.786 "write_zeroes": true, 00:18:08.786 "flush": true, 00:18:08.786 "reset": true, 00:18:08.786 "compare": false, 00:18:08.786 "compare_and_write": false, 00:18:08.786 "abort": true, 00:18:08.786 "nvme_admin": false, 00:18:08.786 "nvme_io": false 00:18:08.786 }, 00:18:08.786 "memory_domains": [ 00:18:08.786 { 00:18:08.786 "dma_device_id": "system", 00:18:08.786 "dma_device_type": 1 00:18:08.786 }, 00:18:08.786 { 00:18:08.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.786 "dma_device_type": 2 00:18:08.786 } 00:18:08.786 ], 00:18:08.786 "driver_specific": {} 00:18:08.786 } 00:18:08.786 ] 00:18:08.786 23:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:08.786 23:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:08.786 23:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:08.786 23:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:09.044 BaseBdev3 00:18:09.303 23:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:18:09.303 23:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:09.303 23:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:09.303 23:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:09.303 23:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:09.303 23:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:09.303 23:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:09.303 23:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:09.627 [ 00:18:09.628 { 00:18:09.628 "name": "BaseBdev3", 00:18:09.628 "aliases": [ 00:18:09.628 "c647768f-92a9-427f-b7f8-bc822b45fd95" 00:18:09.628 ], 00:18:09.628 "product_name": "Malloc disk", 00:18:09.628 "block_size": 512, 00:18:09.628 "num_blocks": 65536, 00:18:09.628 "uuid": "c647768f-92a9-427f-b7f8-bc822b45fd95", 00:18:09.628 "assigned_rate_limits": { 00:18:09.628 "rw_ios_per_sec": 0, 00:18:09.628 "rw_mbytes_per_sec": 0, 00:18:09.628 "r_mbytes_per_sec": 0, 00:18:09.628 "w_mbytes_per_sec": 0 00:18:09.628 }, 00:18:09.628 
"claimed": false, 00:18:09.628 "zoned": false, 00:18:09.628 "supported_io_types": { 00:18:09.628 "read": true, 00:18:09.628 "write": true, 00:18:09.628 "unmap": true, 00:18:09.628 "write_zeroes": true, 00:18:09.628 "flush": true, 00:18:09.628 "reset": true, 00:18:09.628 "compare": false, 00:18:09.628 "compare_and_write": false, 00:18:09.628 "abort": true, 00:18:09.628 "nvme_admin": false, 00:18:09.628 "nvme_io": false 00:18:09.628 }, 00:18:09.628 "memory_domains": [ 00:18:09.628 { 00:18:09.628 "dma_device_id": "system", 00:18:09.628 "dma_device_type": 1 00:18:09.628 }, 00:18:09.628 { 00:18:09.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.628 "dma_device_type": 2 00:18:09.628 } 00:18:09.628 ], 00:18:09.628 "driver_specific": {} 00:18:09.628 } 00:18:09.628 ] 00:18:09.628 23:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:09.628 23:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:09.628 23:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:09.628 23:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:09.886 BaseBdev4 00:18:09.886 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:18:09.886 23:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:18:09.886 23:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:09.886 23:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:09.886 23:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:09.886 23:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:09.886 23:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.146 23:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:10.404 [ 00:18:10.404 { 00:18:10.404 "name": "BaseBdev4", 00:18:10.404 "aliases": [ 00:18:10.404 "ef4e4b74-fce2-45e9-a154-0001d6221305" 00:18:10.404 ], 00:18:10.404 "product_name": "Malloc disk", 00:18:10.404 "block_size": 512, 00:18:10.404 "num_blocks": 65536, 00:18:10.404 "uuid": "ef4e4b74-fce2-45e9-a154-0001d6221305", 00:18:10.404 "assigned_rate_limits": { 00:18:10.404 "rw_ios_per_sec": 0, 00:18:10.404 "rw_mbytes_per_sec": 0, 00:18:10.404 "r_mbytes_per_sec": 0, 00:18:10.404 "w_mbytes_per_sec": 0 00:18:10.404 }, 00:18:10.404 "claimed": false, 00:18:10.404 "zoned": false, 00:18:10.404 "supported_io_types": { 00:18:10.404 "read": true, 00:18:10.404 "write": true, 00:18:10.404 "unmap": true, 00:18:10.404 "write_zeroes": true, 00:18:10.404 "flush": true, 00:18:10.404 "reset": true, 00:18:10.404 "compare": false, 00:18:10.404 "compare_and_write": false, 00:18:10.404 "abort": true, 00:18:10.404 "nvme_admin": false, 00:18:10.404 "nvme_io": false 00:18:10.404 }, 00:18:10.404 "memory_domains": [ 00:18:10.404 { 00:18:10.404 "dma_device_id": "system", 00:18:10.404 "dma_device_type": 1 00:18:10.404 }, 00:18:10.404 { 00:18:10.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:10.404 "dma_device_type": 2 00:18:10.404 } 00:18:10.404 ], 00:18:10.404 "driver_specific": {} 00:18:10.404 } 00:18:10.404 ] 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:10.404 [2024-05-14 23:33:33.645837] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:10.404 [2024-05-14 23:33:33.645927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:10.404 [2024-05-14 23:33:33.645963] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.404 [2024-05-14 23:33:33.647748] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:10.404 [2024-05-14 23:33:33.647804] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.404 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.662 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.662 "name": "Existed_Raid", 00:18:10.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.662 "strip_size_kb": 64, 00:18:10.662 "state": "configuring", 00:18:10.662 "raid_level": "raid0", 00:18:10.663 "superblock": false, 00:18:10.663 "num_base_bdevs": 4, 00:18:10.663 "num_base_bdevs_discovered": 3, 00:18:10.663 "num_base_bdevs_operational": 4, 00:18:10.663 "base_bdevs_list": [ 00:18:10.663 { 00:18:10.663 "name": "BaseBdev1", 00:18:10.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.663 "is_configured": false, 00:18:10.663 "data_offset": 0, 00:18:10.663 "data_size": 0 00:18:10.663 }, 00:18:10.663 { 
00:18:10.663 "name": "BaseBdev2", 00:18:10.663 "uuid": "a8a7b072-a14e-49cd-9d42-07f175ae1771", 00:18:10.663 "is_configured": true, 00:18:10.663 "data_offset": 0, 00:18:10.663 "data_size": 65536 00:18:10.663 }, 00:18:10.663 { 00:18:10.663 "name": "BaseBdev3", 00:18:10.663 "uuid": "c647768f-92a9-427f-b7f8-bc822b45fd95", 00:18:10.663 "is_configured": true, 00:18:10.663 "data_offset": 0, 00:18:10.663 "data_size": 65536 00:18:10.663 }, 00:18:10.663 { 00:18:10.663 "name": "BaseBdev4", 00:18:10.663 "uuid": "ef4e4b74-fce2-45e9-a154-0001d6221305", 00:18:10.663 "is_configured": true, 00:18:10.663 "data_offset": 0, 00:18:10.663 "data_size": 65536 00:18:10.663 } 00:18:10.663 ] 00:18:10.663 }' 00:18:10.663 23:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.663 23:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:11.600 [2024-05-14 23:33:34.773990] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.600 23:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.859 23:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.859 "name": "Existed_Raid", 00:18:11.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.859 "strip_size_kb": 64, 00:18:11.859 "state": "configuring", 00:18:11.859 "raid_level": "raid0", 00:18:11.859 "superblock": false, 00:18:11.859 "num_base_bdevs": 4, 00:18:11.859 "num_base_bdevs_discovered": 2, 00:18:11.859 "num_base_bdevs_operational": 4, 00:18:11.859 "base_bdevs_list": [ 00:18:11.859 { 00:18:11.859 "name": "BaseBdev1", 00:18:11.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.859 "is_configured": false, 00:18:11.859 "data_offset": 0, 00:18:11.859 "data_size": 0 00:18:11.859 }, 00:18:11.859 { 00:18:11.859 "name": null, 00:18:11.859 "uuid": "a8a7b072-a14e-49cd-9d42-07f175ae1771", 00:18:11.859 "is_configured": false, 00:18:11.859 
"data_offset": 0, 00:18:11.859 "data_size": 65536 00:18:11.859 }, 00:18:11.859 { 00:18:11.859 "name": "BaseBdev3", 00:18:11.859 "uuid": "c647768f-92a9-427f-b7f8-bc822b45fd95", 00:18:11.859 "is_configured": true, 00:18:11.859 "data_offset": 0, 00:18:11.859 "data_size": 65536 00:18:11.859 }, 00:18:11.859 { 00:18:11.859 "name": "BaseBdev4", 00:18:11.859 "uuid": "ef4e4b74-fce2-45e9-a154-0001d6221305", 00:18:11.859 "is_configured": true, 00:18:11.859 "data_offset": 0, 00:18:11.859 "data_size": 65536 00:18:11.859 } 00:18:11.859 ] 00:18:11.859 }' 00:18:11.859 23:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.859 23:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.427 23:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.427 23:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:12.686 23:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:18:12.686 23:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:12.945 [2024-05-14 23:33:36.122466] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.945 BaseBdev1 00:18:12.945 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:18:12.945 23:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:12.945 23:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:12.945 23:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:12.945 23:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:12.945 23:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:12.945 23:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.205 23:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:13.488 [ 00:18:13.488 { 00:18:13.488 "name": "BaseBdev1", 00:18:13.488 "aliases": [ 00:18:13.488 "01cbb44a-2e09-404d-93d0-9a2789306dab" 00:18:13.488 ], 00:18:13.488 "product_name": "Malloc disk", 00:18:13.488 "block_size": 512, 00:18:13.488 "num_blocks": 65536, 00:18:13.488 "uuid": "01cbb44a-2e09-404d-93d0-9a2789306dab", 00:18:13.488 "assigned_rate_limits": { 00:18:13.488 "rw_ios_per_sec": 0, 00:18:13.488 "rw_mbytes_per_sec": 0, 00:18:13.488 "r_mbytes_per_sec": 0, 00:18:13.488 "w_mbytes_per_sec": 0 00:18:13.488 }, 00:18:13.488 "claimed": true, 00:18:13.488 "claim_type": "exclusive_write", 00:18:13.488 "zoned": false, 00:18:13.488 "supported_io_types": { 00:18:13.488 "read": true, 00:18:13.488 "write": true, 00:18:13.488 "unmap": true, 00:18:13.488 "write_zeroes": true, 00:18:13.488 "flush": true, 00:18:13.488 "reset": true, 00:18:13.488 "compare": false, 00:18:13.488 "compare_and_write": false, 00:18:13.488 "abort": true, 00:18:13.488 "nvme_admin": false, 
00:18:13.488 "nvme_io": false 00:18:13.488 }, 00:18:13.488 "memory_domains": [ 00:18:13.488 { 00:18:13.488 "dma_device_id": "system", 00:18:13.488 "dma_device_type": 1 00:18:13.488 }, 00:18:13.488 { 00:18:13.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.488 "dma_device_type": 2 00:18:13.488 } 00:18:13.488 ], 00:18:13.488 "driver_specific": {} 00:18:13.488 } 00:18:13.488 ] 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.488 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.748 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:13.748 "name": "Existed_Raid", 00:18:13.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.748 "strip_size_kb": 64, 00:18:13.748 "state": "configuring", 00:18:13.748 "raid_level": "raid0", 00:18:13.748 "superblock": false, 00:18:13.748 "num_base_bdevs": 4, 00:18:13.748 "num_base_bdevs_discovered": 3, 00:18:13.748 "num_base_bdevs_operational": 4, 00:18:13.748 "base_bdevs_list": [ 00:18:13.748 { 00:18:13.748 "name": "BaseBdev1", 00:18:13.748 "uuid": "01cbb44a-2e09-404d-93d0-9a2789306dab", 00:18:13.748 "is_configured": true, 00:18:13.748 "data_offset": 0, 00:18:13.748 "data_size": 65536 00:18:13.748 }, 00:18:13.748 { 00:18:13.748 "name": null, 00:18:13.748 "uuid": "a8a7b072-a14e-49cd-9d42-07f175ae1771", 00:18:13.748 "is_configured": false, 00:18:13.748 "data_offset": 0, 00:18:13.748 "data_size": 65536 00:18:13.748 }, 00:18:13.748 { 00:18:13.748 "name": "BaseBdev3", 00:18:13.748 "uuid": "c647768f-92a9-427f-b7f8-bc822b45fd95", 00:18:13.748 "is_configured": true, 00:18:13.748 "data_offset": 0, 00:18:13.748 "data_size": 65536 00:18:13.748 }, 00:18:13.748 { 00:18:13.748 "name": "BaseBdev4", 00:18:13.748 "uuid": "ef4e4b74-fce2-45e9-a154-0001d6221305", 00:18:13.748 "is_configured": true, 00:18:13.748 "data_offset": 0, 00:18:13.748 "data_size": 65536 00:18:13.748 } 00:18:13.748 ] 00:18:13.748 }' 00:18:13.748 23:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:13.748 23:33:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.316 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.316 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:14.575 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:14.575 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:14.833 [2024-05-14 23:33:37.870837] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:14.833 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:14.833 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:14.833 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:14.833 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:14.833 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:14.834 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:14.834 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.834 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.834 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.834 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.834 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.834 23:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.092 23:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.092 "name": "Existed_Raid", 00:18:15.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.092 "strip_size_kb": 64, 00:18:15.092 "state": "configuring", 00:18:15.092 "raid_level": "raid0", 00:18:15.092 "superblock": false, 00:18:15.092 "num_base_bdevs": 4, 00:18:15.092 "num_base_bdevs_discovered": 2, 00:18:15.092 "num_base_bdevs_operational": 4, 00:18:15.092 "base_bdevs_list": [ 00:18:15.092 { 00:18:15.092 "name": "BaseBdev1", 00:18:15.092 "uuid": "01cbb44a-2e09-404d-93d0-9a2789306dab", 00:18:15.092 "is_configured": true, 00:18:15.092 "data_offset": 0, 00:18:15.092 "data_size": 65536 00:18:15.092 }, 00:18:15.092 { 00:18:15.092 "name": null, 00:18:15.092 "uuid": "a8a7b072-a14e-49cd-9d42-07f175ae1771", 00:18:15.092 "is_configured": false, 00:18:15.092 "data_offset": 0, 00:18:15.092 "data_size": 65536 00:18:15.092 }, 00:18:15.092 { 00:18:15.092 "name": null, 00:18:15.092 "uuid": "c647768f-92a9-427f-b7f8-bc822b45fd95", 00:18:15.092 "is_configured": false, 00:18:15.092 "data_offset": 0, 00:18:15.092 "data_size": 65536 00:18:15.092 }, 00:18:15.092 { 00:18:15.092 "name": "BaseBdev4", 00:18:15.092 "uuid": "ef4e4b74-fce2-45e9-a154-0001d6221305", 00:18:15.092 "is_configured": true, 
00:18:15.092 "data_offset": 0, 00:18:15.092 "data_size": 65536 00:18:15.092 } 00:18:15.092 ] 00:18:15.092 }' 00:18:15.092 23:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.092 23:33:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.661 23:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:15.661 23:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.920 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:18:15.920 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:15.920 [2024-05-14 23:33:39.191100] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:16.178 "name": "Existed_Raid", 00:18:16.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.178 "strip_size_kb": 64, 00:18:16.178 "state": "configuring", 00:18:16.178 "raid_level": "raid0", 00:18:16.178 "superblock": false, 00:18:16.178 "num_base_bdevs": 4, 00:18:16.178 "num_base_bdevs_discovered": 3, 00:18:16.178 "num_base_bdevs_operational": 4, 00:18:16.178 "base_bdevs_list": [ 00:18:16.178 { 00:18:16.178 "name": "BaseBdev1", 00:18:16.178 "uuid": "01cbb44a-2e09-404d-93d0-9a2789306dab", 00:18:16.178 "is_configured": true, 00:18:16.178 "data_offset": 0, 00:18:16.178 "data_size": 65536 00:18:16.178 }, 00:18:16.178 { 00:18:16.178 "name": null, 00:18:16.178 "uuid": "a8a7b072-a14e-49cd-9d42-07f175ae1771", 00:18:16.178 "is_configured": false, 00:18:16.178 "data_offset": 0, 00:18:16.178 "data_size": 65536 00:18:16.178 }, 00:18:16.178 { 00:18:16.178 "name": "BaseBdev3", 00:18:16.178 
"uuid": "c647768f-92a9-427f-b7f8-bc822b45fd95", 00:18:16.178 "is_configured": true, 00:18:16.178 "data_offset": 0, 00:18:16.178 "data_size": 65536 00:18:16.178 }, 00:18:16.178 { 00:18:16.178 "name": "BaseBdev4", 00:18:16.178 "uuid": "ef4e4b74-fce2-45e9-a154-0001d6221305", 00:18:16.178 "is_configured": true, 00:18:16.178 "data_offset": 0, 00:18:16.178 "data_size": 65536 00:18:16.178 } 00:18:16.178 ] 00:18:16.178 }' 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:16.178 23:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.110 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.110 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:17.110 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:18:17.110 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:17.368 [2024-05-14 23:33:40.595275] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.637 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.901 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.901 "name": "Existed_Raid", 00:18:17.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.901 "strip_size_kb": 64, 00:18:17.901 "state": "configuring", 00:18:17.901 "raid_level": "raid0", 00:18:17.901 "superblock": false, 00:18:17.901 "num_base_bdevs": 4, 00:18:17.901 "num_base_bdevs_discovered": 2, 00:18:17.901 "num_base_bdevs_operational": 4, 00:18:17.901 "base_bdevs_list": [ 00:18:17.901 { 00:18:17.901 "name": null, 00:18:17.901 "uuid": "01cbb44a-2e09-404d-93d0-9a2789306dab", 00:18:17.901 "is_configured": false, 00:18:17.901 "data_offset": 0, 00:18:17.901 "data_size": 65536 00:18:17.901 }, 00:18:17.901 { 
00:18:17.901 "name": null, 00:18:17.901 "uuid": "a8a7b072-a14e-49cd-9d42-07f175ae1771", 00:18:17.901 "is_configured": false, 00:18:17.901 "data_offset": 0, 00:18:17.901 "data_size": 65536 00:18:17.901 }, 00:18:17.901 { 00:18:17.901 "name": "BaseBdev3", 00:18:17.901 "uuid": "c647768f-92a9-427f-b7f8-bc822b45fd95", 00:18:17.901 "is_configured": true, 00:18:17.901 "data_offset": 0, 00:18:17.901 "data_size": 65536 00:18:17.901 }, 00:18:17.901 { 00:18:17.901 "name": "BaseBdev4", 00:18:17.901 "uuid": "ef4e4b74-fce2-45e9-a154-0001d6221305", 00:18:17.901 "is_configured": true, 00:18:17.901 "data_offset": 0, 00:18:17.901 "data_size": 65536 00:18:17.901 } 00:18:17.901 ] 00:18:17.901 }' 00:18:17.901 23:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.901 23:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.466 23:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.467 23:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:18.724 23:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:18:18.724 23:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:18.983 [2024-05-14 23:33:42.149277] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.983 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.242 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:19.242 "name": "Existed_Raid", 00:18:19.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.242 "strip_size_kb": 64, 00:18:19.242 "state": "configuring", 00:18:19.242 "raid_level": "raid0", 00:18:19.242 "superblock": false, 00:18:19.242 "num_base_bdevs": 4, 00:18:19.242 "num_base_bdevs_discovered": 3, 00:18:19.242 
"num_base_bdevs_operational": 4, 00:18:19.242 "base_bdevs_list": [ 00:18:19.242 { 00:18:19.242 "name": null, 00:18:19.242 "uuid": "01cbb44a-2e09-404d-93d0-9a2789306dab", 00:18:19.242 "is_configured": false, 00:18:19.242 "data_offset": 0, 00:18:19.242 "data_size": 65536 00:18:19.242 }, 00:18:19.242 { 00:18:19.242 "name": "BaseBdev2", 00:18:19.242 "uuid": "a8a7b072-a14e-49cd-9d42-07f175ae1771", 00:18:19.242 "is_configured": true, 00:18:19.242 "data_offset": 0, 00:18:19.242 "data_size": 65536 00:18:19.242 }, 00:18:19.242 { 00:18:19.242 "name": "BaseBdev3", 00:18:19.242 "uuid": "c647768f-92a9-427f-b7f8-bc822b45fd95", 00:18:19.242 "is_configured": true, 00:18:19.242 "data_offset": 0, 00:18:19.242 "data_size": 65536 00:18:19.242 }, 00:18:19.242 { 00:18:19.242 "name": "BaseBdev4", 00:18:19.242 "uuid": "ef4e4b74-fce2-45e9-a154-0001d6221305", 00:18:19.242 "is_configured": true, 00:18:19.242 "data_offset": 0, 00:18:19.242 "data_size": 65536 00:18:19.242 } 00:18:19.242 ] 00:18:19.242 }' 00:18:19.242 23:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:19.242 23:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.809 23:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.809 23:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:20.067 23:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:18:20.067 23:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.067 23:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:20.327 23:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 01cbb44a-2e09-404d-93d0-9a2789306dab 00:18:20.586 [2024-05-14 23:33:43.636852] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:20.586 [2024-05-14 23:33:43.636898] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:18:20.586 [2024-05-14 23:33:43.636909] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:20.586 [2024-05-14 23:33:43.637025] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:20.586 NewBaseBdev 00:18:20.586 [2024-05-14 23:33:43.637447] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:18:20.586 [2024-05-14 23:33:43.637470] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:18:20.586 [2024-05-14 23:33:43.637656] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.586 23:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:18:20.586 23:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:18:20.586 23:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:20.586 23:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 
00:18:20.586 23:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:20.586 23:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:20.586 23:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:20.586 23:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:20.844 [ 00:18:20.844 { 00:18:20.844 "name": "NewBaseBdev", 00:18:20.844 "aliases": [ 00:18:20.844 "01cbb44a-2e09-404d-93d0-9a2789306dab" 00:18:20.844 ], 00:18:20.844 "product_name": "Malloc disk", 00:18:20.844 "block_size": 512, 00:18:20.844 "num_blocks": 65536, 00:18:20.844 "uuid": "01cbb44a-2e09-404d-93d0-9a2789306dab", 00:18:20.844 "assigned_rate_limits": { 00:18:20.844 "rw_ios_per_sec": 0, 00:18:20.844 "rw_mbytes_per_sec": 0, 00:18:20.844 "r_mbytes_per_sec": 0, 00:18:20.844 "w_mbytes_per_sec": 0 00:18:20.844 }, 00:18:20.844 "claimed": true, 00:18:20.844 "claim_type": "exclusive_write", 00:18:20.844 "zoned": false, 00:18:20.844 "supported_io_types": { 00:18:20.844 "read": true, 00:18:20.844 "write": true, 00:18:20.844 "unmap": true, 00:18:20.845 "write_zeroes": true, 00:18:20.845 "flush": true, 00:18:20.845 "reset": true, 00:18:20.845 "compare": false, 00:18:20.845 "compare_and_write": false, 00:18:20.845 "abort": true, 00:18:20.845 "nvme_admin": false, 00:18:20.845 "nvme_io": false 00:18:20.845 }, 00:18:20.845 "memory_domains": [ 00:18:20.845 { 00:18:20.845 "dma_device_id": "system", 00:18:20.845 "dma_device_type": 1 00:18:20.845 }, 00:18:20.845 { 00:18:20.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.845 "dma_device_type": 2 00:18:20.845 } 00:18:20.845 ], 00:18:20.845 "driver_specific": {} 00:18:20.845 } 00:18:20.845 ] 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.845 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.103 
23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:21.103 "name": "Existed_Raid", 00:18:21.103 "uuid": "bc8270c2-c5cc-4376-9ea6-71fd6991338f", 00:18:21.103 "strip_size_kb": 64, 00:18:21.103 "state": "online", 00:18:21.103 "raid_level": "raid0", 00:18:21.103 "superblock": false, 00:18:21.103 "num_base_bdevs": 4, 00:18:21.103 "num_base_bdevs_discovered": 4, 00:18:21.103 "num_base_bdevs_operational": 4, 00:18:21.103 "base_bdevs_list": [ 00:18:21.103 { 00:18:21.103 "name": "NewBaseBdev", 00:18:21.103 "uuid": "01cbb44a-2e09-404d-93d0-9a2789306dab", 00:18:21.103 "is_configured": true, 00:18:21.103 "data_offset": 0, 00:18:21.103 "data_size": 65536 00:18:21.103 }, 00:18:21.103 { 00:18:21.103 "name": "BaseBdev2", 00:18:21.103 "uuid": "a8a7b072-a14e-49cd-9d42-07f175ae1771", 00:18:21.103 "is_configured": true, 00:18:21.103 "data_offset": 0, 00:18:21.103 "data_size": 65536 00:18:21.103 }, 00:18:21.103 { 00:18:21.103 "name": "BaseBdev3", 00:18:21.103 "uuid": "c647768f-92a9-427f-b7f8-bc822b45fd95", 00:18:21.103 "is_configured": true, 00:18:21.103 "data_offset": 0, 00:18:21.103 "data_size": 65536 00:18:21.103 }, 00:18:21.103 { 00:18:21.103 "name": "BaseBdev4", 00:18:21.103 "uuid": "ef4e4b74-fce2-45e9-a154-0001d6221305", 00:18:21.103 "is_configured": true, 00:18:21.103 "data_offset": 0, 00:18:21.103 "data_size": 65536 00:18:21.103 } 00:18:21.103 ] 00:18:21.103 }' 00:18:21.103 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:21.103 23:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.703 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:18:21.703 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:18:21.703 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:21.703 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:21.703 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:21.703 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:18:21.703 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:21.703 23:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:21.962 [2024-05-14 23:33:45.097305] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.962 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:21.962 "name": "Existed_Raid", 00:18:21.962 "aliases": [ 00:18:21.962 "bc8270c2-c5cc-4376-9ea6-71fd6991338f" 00:18:21.962 ], 00:18:21.962 "product_name": "Raid Volume", 00:18:21.962 "block_size": 512, 00:18:21.962 "num_blocks": 262144, 00:18:21.962 "uuid": "bc8270c2-c5cc-4376-9ea6-71fd6991338f", 00:18:21.962 "assigned_rate_limits": { 00:18:21.962 "rw_ios_per_sec": 0, 00:18:21.962 "rw_mbytes_per_sec": 0, 00:18:21.962 "r_mbytes_per_sec": 0, 00:18:21.962 "w_mbytes_per_sec": 0 00:18:21.962 }, 00:18:21.962 "claimed": false, 00:18:21.962 "zoned": false, 00:18:21.962 "supported_io_types": { 00:18:21.962 "read": true, 00:18:21.962 "write": true, 00:18:21.962 "unmap": true, 00:18:21.962 "write_zeroes": true, 00:18:21.962 "flush": true, 
00:18:21.962 "reset": true, 00:18:21.962 "compare": false, 00:18:21.962 "compare_and_write": false, 00:18:21.962 "abort": false, 00:18:21.962 "nvme_admin": false, 00:18:21.962 "nvme_io": false 00:18:21.962 }, 00:18:21.962 "memory_domains": [ 00:18:21.962 { 00:18:21.962 "dma_device_id": "system", 00:18:21.962 "dma_device_type": 1 00:18:21.962 }, 00:18:21.962 { 00:18:21.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.962 "dma_device_type": 2 00:18:21.962 }, 00:18:21.962 { 00:18:21.962 "dma_device_id": "system", 00:18:21.962 "dma_device_type": 1 00:18:21.962 }, 00:18:21.962 { 00:18:21.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.962 "dma_device_type": 2 00:18:21.962 }, 00:18:21.962 { 00:18:21.962 "dma_device_id": "system", 00:18:21.962 "dma_device_type": 1 00:18:21.962 }, 00:18:21.962 { 00:18:21.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.962 "dma_device_type": 2 00:18:21.962 }, 00:18:21.962 { 00:18:21.962 "dma_device_id": "system", 00:18:21.962 "dma_device_type": 1 00:18:21.962 }, 00:18:21.962 { 00:18:21.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.962 "dma_device_type": 2 00:18:21.962 } 00:18:21.962 ], 00:18:21.962 "driver_specific": { 00:18:21.962 "raid": { 00:18:21.962 "uuid": "bc8270c2-c5cc-4376-9ea6-71fd6991338f", 00:18:21.962 "strip_size_kb": 64, 00:18:21.962 "state": "online", 00:18:21.962 "raid_level": "raid0", 00:18:21.962 "superblock": false, 00:18:21.962 "num_base_bdevs": 4, 00:18:21.962 "num_base_bdevs_discovered": 4, 00:18:21.962 "num_base_bdevs_operational": 4, 00:18:21.962 "base_bdevs_list": [ 00:18:21.962 { 00:18:21.962 "name": "NewBaseBdev", 00:18:21.962 "uuid": "01cbb44a-2e09-404d-93d0-9a2789306dab", 00:18:21.962 "is_configured": true, 00:18:21.962 "data_offset": 0, 00:18:21.962 "data_size": 65536 00:18:21.962 }, 00:18:21.962 { 00:18:21.962 "name": "BaseBdev2", 00:18:21.962 "uuid": "a8a7b072-a14e-49cd-9d42-07f175ae1771", 00:18:21.962 "is_configured": true, 00:18:21.962 "data_offset": 0, 00:18:21.962 "data_size": 65536 00:18:21.962 }, 00:18:21.962 { 00:18:21.962 "name": "BaseBdev3", 00:18:21.962 "uuid": "c647768f-92a9-427f-b7f8-bc822b45fd95", 00:18:21.962 "is_configured": true, 00:18:21.962 "data_offset": 0, 00:18:21.962 "data_size": 65536 00:18:21.962 }, 00:18:21.962 { 00:18:21.962 "name": "BaseBdev4", 00:18:21.962 "uuid": "ef4e4b74-fce2-45e9-a154-0001d6221305", 00:18:21.962 "is_configured": true, 00:18:21.962 "data_offset": 0, 00:18:21.962 "data_size": 65536 00:18:21.962 } 00:18:21.962 ] 00:18:21.962 } 00:18:21.962 } 00:18:21.962 }' 00:18:21.962 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:21.962 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:18:21.962 BaseBdev2 00:18:21.962 BaseBdev3 00:18:21.962 BaseBdev4' 00:18:21.963 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:21.963 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:21.963 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:22.222 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:22.222 "name": "NewBaseBdev", 00:18:22.222 "aliases": [ 00:18:22.222 "01cbb44a-2e09-404d-93d0-9a2789306dab" 00:18:22.222 ], 00:18:22.222 "product_name": 
"Malloc disk", 00:18:22.222 "block_size": 512, 00:18:22.222 "num_blocks": 65536, 00:18:22.222 "uuid": "01cbb44a-2e09-404d-93d0-9a2789306dab", 00:18:22.222 "assigned_rate_limits": { 00:18:22.222 "rw_ios_per_sec": 0, 00:18:22.222 "rw_mbytes_per_sec": 0, 00:18:22.222 "r_mbytes_per_sec": 0, 00:18:22.222 "w_mbytes_per_sec": 0 00:18:22.222 }, 00:18:22.222 "claimed": true, 00:18:22.222 "claim_type": "exclusive_write", 00:18:22.222 "zoned": false, 00:18:22.222 "supported_io_types": { 00:18:22.222 "read": true, 00:18:22.222 "write": true, 00:18:22.222 "unmap": true, 00:18:22.222 "write_zeroes": true, 00:18:22.222 "flush": true, 00:18:22.222 "reset": true, 00:18:22.222 "compare": false, 00:18:22.222 "compare_and_write": false, 00:18:22.222 "abort": true, 00:18:22.222 "nvme_admin": false, 00:18:22.222 "nvme_io": false 00:18:22.222 }, 00:18:22.222 "memory_domains": [ 00:18:22.222 { 00:18:22.222 "dma_device_id": "system", 00:18:22.222 "dma_device_type": 1 00:18:22.222 }, 00:18:22.222 { 00:18:22.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.222 "dma_device_type": 2 00:18:22.222 } 00:18:22.222 ], 00:18:22.222 "driver_specific": {} 00:18:22.222 }' 00:18:22.222 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:22.222 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:22.222 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:22.222 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:22.481 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:22.481 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:22.481 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:22.481 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:22.481 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:22.481 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:22.481 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:22.740 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:22.740 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:22.740 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:22.740 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:22.740 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:22.740 "name": "BaseBdev2", 00:18:22.740 "aliases": [ 00:18:22.740 "a8a7b072-a14e-49cd-9d42-07f175ae1771" 00:18:22.740 ], 00:18:22.740 "product_name": "Malloc disk", 00:18:22.740 "block_size": 512, 00:18:22.740 "num_blocks": 65536, 00:18:22.740 "uuid": "a8a7b072-a14e-49cd-9d42-07f175ae1771", 00:18:22.740 "assigned_rate_limits": { 00:18:22.740 "rw_ios_per_sec": 0, 00:18:22.740 "rw_mbytes_per_sec": 0, 00:18:22.740 "r_mbytes_per_sec": 0, 00:18:22.740 "w_mbytes_per_sec": 0 00:18:22.740 }, 00:18:22.740 "claimed": true, 00:18:22.740 "claim_type": "exclusive_write", 00:18:22.740 "zoned": false, 00:18:22.740 "supported_io_types": { 00:18:22.740 "read": 
true, 00:18:22.740 "write": true, 00:18:22.740 "unmap": true, 00:18:22.740 "write_zeroes": true, 00:18:22.740 "flush": true, 00:18:22.740 "reset": true, 00:18:22.740 "compare": false, 00:18:22.740 "compare_and_write": false, 00:18:22.740 "abort": true, 00:18:22.740 "nvme_admin": false, 00:18:22.740 "nvme_io": false 00:18:22.740 }, 00:18:22.740 "memory_domains": [ 00:18:22.740 { 00:18:22.740 "dma_device_id": "system", 00:18:22.740 "dma_device_type": 1 00:18:22.740 }, 00:18:22.740 { 00:18:22.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.740 "dma_device_type": 2 00:18:22.740 } 00:18:22.740 ], 00:18:22.740 "driver_specific": {} 00:18:22.740 }' 00:18:22.740 23:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:22.999 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:22.999 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:22.999 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:22.999 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:22.999 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:22.999 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:22.999 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:22.999 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:22.999 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:23.258 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:23.258 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:23.258 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:23.258 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:23.258 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:23.516 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:23.516 "name": "BaseBdev3", 00:18:23.516 "aliases": [ 00:18:23.516 "c647768f-92a9-427f-b7f8-bc822b45fd95" 00:18:23.516 ], 00:18:23.516 "product_name": "Malloc disk", 00:18:23.516 "block_size": 512, 00:18:23.516 "num_blocks": 65536, 00:18:23.516 "uuid": "c647768f-92a9-427f-b7f8-bc822b45fd95", 00:18:23.517 "assigned_rate_limits": { 00:18:23.517 "rw_ios_per_sec": 0, 00:18:23.517 "rw_mbytes_per_sec": 0, 00:18:23.517 "r_mbytes_per_sec": 0, 00:18:23.517 "w_mbytes_per_sec": 0 00:18:23.517 }, 00:18:23.517 "claimed": true, 00:18:23.517 "claim_type": "exclusive_write", 00:18:23.517 "zoned": false, 00:18:23.517 "supported_io_types": { 00:18:23.517 "read": true, 00:18:23.517 "write": true, 00:18:23.517 "unmap": true, 00:18:23.517 "write_zeroes": true, 00:18:23.517 "flush": true, 00:18:23.517 "reset": true, 00:18:23.517 "compare": false, 00:18:23.517 "compare_and_write": false, 00:18:23.517 "abort": true, 00:18:23.517 "nvme_admin": false, 00:18:23.517 "nvme_io": false 00:18:23.517 }, 00:18:23.517 "memory_domains": [ 00:18:23.517 { 00:18:23.517 "dma_device_id": "system", 00:18:23.517 "dma_device_type": 1 00:18:23.517 }, 00:18:23.517 { 00:18:23.517 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.517 "dma_device_type": 2 00:18:23.517 } 00:18:23.517 ], 00:18:23.517 "driver_specific": {} 00:18:23.517 }' 00:18:23.517 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:23.517 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:23.517 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:23.517 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:23.517 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:23.776 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:23.776 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:23.776 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:23.776 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:23.776 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:23.776 23:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:23.776 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:23.776 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:23.776 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:23.776 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:24.034 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:24.034 "name": "BaseBdev4", 00:18:24.034 "aliases": [ 00:18:24.034 "ef4e4b74-fce2-45e9-a154-0001d6221305" 00:18:24.034 ], 00:18:24.034 "product_name": "Malloc disk", 00:18:24.034 "block_size": 512, 00:18:24.034 "num_blocks": 65536, 00:18:24.034 "uuid": "ef4e4b74-fce2-45e9-a154-0001d6221305", 00:18:24.034 "assigned_rate_limits": { 00:18:24.034 "rw_ios_per_sec": 0, 00:18:24.034 "rw_mbytes_per_sec": 0, 00:18:24.034 "r_mbytes_per_sec": 0, 00:18:24.034 "w_mbytes_per_sec": 0 00:18:24.034 }, 00:18:24.034 "claimed": true, 00:18:24.034 "claim_type": "exclusive_write", 00:18:24.034 "zoned": false, 00:18:24.034 "supported_io_types": { 00:18:24.034 "read": true, 00:18:24.034 "write": true, 00:18:24.034 "unmap": true, 00:18:24.034 "write_zeroes": true, 00:18:24.034 "flush": true, 00:18:24.034 "reset": true, 00:18:24.034 "compare": false, 00:18:24.034 "compare_and_write": false, 00:18:24.034 "abort": true, 00:18:24.034 "nvme_admin": false, 00:18:24.034 "nvme_io": false 00:18:24.034 }, 00:18:24.034 "memory_domains": [ 00:18:24.034 { 00:18:24.034 "dma_device_id": "system", 00:18:24.034 "dma_device_type": 1 00:18:24.034 }, 00:18:24.034 { 00:18:24.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.034 "dma_device_type": 2 00:18:24.034 } 00:18:24.034 ], 00:18:24.034 "driver_specific": {} 00:18:24.034 }' 00:18:24.034 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:24.034 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:24.292 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:24.292 23:33:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:24.292 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:24.292 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:24.292 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:24.292 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:24.550 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:24.550 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:24.550 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:24.550 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:24.550 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:24.808 [2024-05-14 23:33:47.857454] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:24.808 [2024-05-14 23:33:47.857499] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.808 [2024-05-14 23:33:47.857573] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.808 [2024-05-14 23:33:47.857618] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.808 [2024-05-14 23:33:47.857629] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:18:24.808 23:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 64203 00:18:24.808 23:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 64203 ']' 00:18:24.808 23:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 64203 00:18:24.808 23:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:18:24.808 23:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:24.808 23:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64203 00:18:24.808 killing process with pid 64203 00:18:24.808 23:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:24.808 23:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:24.808 23:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64203' 00:18:24.808 23:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 64203 00:18:24.808 23:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 64203 00:18:24.808 [2024-05-14 23:33:47.889716] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.067 [2024-05-14 23:33:48.216790] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:18:26.444 00:18:26.444 real 0m34.923s 00:18:26.444 user 1m5.837s 00:18:26.444 sys 0m3.482s 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:18:26.444 ************************************ 00:18:26.444 END TEST raid_state_function_test 00:18:26.444 ************************************ 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.444 23:33:49 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:26.444 23:33:49 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:26.444 23:33:49 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:26.444 23:33:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.444 ************************************ 00:18:26.444 START TEST raid_state_function_test_sb 00:18:26.444 ************************************ 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 true 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:18:26.444 Process raid pid: 65316 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- 
# local strip_size_create_arg 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=65316 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 65316' 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 65316 /var/tmp/spdk-raid.sock 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 65316 ']' 00:18:26.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:26.444 23:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.444 [2024-05-14 23:33:49.631719] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
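The _sb variant that begins here exercises the same state machine with an on-disk superblock: the harness passes -s to bdev_raid_create, and the later dumps in this run report data_offset 2048 / data_size 63488 for the base bdevs instead of 0 / 65536. An illustrative sketch of the setup being traced (not part of the captured output; the rpc helper is our shorthand, the paths and arguments are those shown in this log, and backgrounding bdev_svc with & stands in for the harness's waitforlisten step):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# Stub SPDK app serving the RPC socket; -L bdev_raid enables the *DEBUG* traces seen in this log.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
# -z 64 sets strip_size_kb=64; -s requests a superblock on every base bdev.
rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# The base bdevs do not exist yet, so the raid sits in "configuring" until they appear.
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
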
00:18:26.444 [2024-05-14 23:33:49.631928] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.705 [2024-05-14 23:33:49.792369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.962 [2024-05-14 23:33:50.033750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.962 [2024-05-14 23:33:50.231487] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.221 23:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:27.221 23:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:18:27.221 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:27.479 [2024-05-14 23:33:50.644240] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.479 [2024-05-14 23:33:50.644314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.479 [2024-05-14 23:33:50.644329] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.480 [2024-05-14 23:33:50.644350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.480 [2024-05-14 23:33:50.644359] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.480 [2024-05-14 23:33:50.644408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.480 [2024-05-14 23:33:50.644420] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:27.480 [2024-05-14 23:33:50.644443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.480 23:33:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.738 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.738 "name": "Existed_Raid", 00:18:27.738 "uuid": "e6f85757-6e39-451d-a7db-f59fe826deb2", 00:18:27.738 "strip_size_kb": 64, 00:18:27.738 "state": "configuring", 00:18:27.738 "raid_level": "raid0", 00:18:27.738 "superblock": true, 00:18:27.738 "num_base_bdevs": 4, 00:18:27.738 "num_base_bdevs_discovered": 0, 00:18:27.738 "num_base_bdevs_operational": 4, 00:18:27.738 "base_bdevs_list": [ 00:18:27.738 { 00:18:27.738 "name": "BaseBdev1", 00:18:27.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.738 "is_configured": false, 00:18:27.738 "data_offset": 0, 00:18:27.738 "data_size": 0 00:18:27.738 }, 00:18:27.738 { 00:18:27.738 "name": "BaseBdev2", 00:18:27.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.738 "is_configured": false, 00:18:27.738 "data_offset": 0, 00:18:27.738 "data_size": 0 00:18:27.738 }, 00:18:27.738 { 00:18:27.738 "name": "BaseBdev3", 00:18:27.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.739 "is_configured": false, 00:18:27.739 "data_offset": 0, 00:18:27.739 "data_size": 0 00:18:27.739 }, 00:18:27.739 { 00:18:27.739 "name": "BaseBdev4", 00:18:27.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.739 "is_configured": false, 00:18:27.739 "data_offset": 0, 00:18:27.739 "data_size": 0 00:18:27.739 } 00:18:27.739 ] 00:18:27.739 }' 00:18:27.739 23:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.739 23:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.305 23:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:28.564 [2024-05-14 23:33:51.680200] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:28.564 [2024-05-14 23:33:51.680253] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:18:28.564 23:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:28.822 [2024-05-14 23:33:51.872278] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:28.822 [2024-05-14 23:33:51.872372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:28.822 [2024-05-14 23:33:51.872389] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:28.822 [2024-05-14 23:33:51.872428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:28.822 [2024-05-14 23:33:51.872440] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:28.822 [2024-05-14 23:33:51.872459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:28.822 [2024-05-14 23:33:51.872468] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:28.822 [2024-05-14 23:33:51.872502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:28.822 23:33:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:28.822 [2024-05-14 23:33:52.103649] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.822 BaseBdev1 00:18:29.080 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:18:29.080 23:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:29.080 23:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:29.080 23:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:29.080 23:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:29.080 23:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:29.080 23:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.080 23:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:29.337 [ 00:18:29.337 { 00:18:29.337 "name": "BaseBdev1", 00:18:29.337 "aliases": [ 00:18:29.337 "013459c9-1467-4742-8c2a-06ad1096475b" 00:18:29.337 ], 00:18:29.337 "product_name": "Malloc disk", 00:18:29.337 "block_size": 512, 00:18:29.337 "num_blocks": 65536, 00:18:29.337 "uuid": "013459c9-1467-4742-8c2a-06ad1096475b", 00:18:29.337 "assigned_rate_limits": { 00:18:29.337 "rw_ios_per_sec": 0, 00:18:29.337 "rw_mbytes_per_sec": 0, 00:18:29.337 "r_mbytes_per_sec": 0, 00:18:29.337 "w_mbytes_per_sec": 0 00:18:29.337 }, 00:18:29.337 "claimed": true, 00:18:29.337 "claim_type": "exclusive_write", 00:18:29.337 "zoned": false, 00:18:29.337 "supported_io_types": { 00:18:29.337 "read": true, 00:18:29.337 "write": true, 00:18:29.337 "unmap": true, 00:18:29.337 "write_zeroes": true, 00:18:29.337 "flush": true, 00:18:29.337 "reset": true, 00:18:29.337 "compare": false, 00:18:29.337 "compare_and_write": false, 00:18:29.337 "abort": true, 00:18:29.337 "nvme_admin": false, 00:18:29.337 "nvme_io": false 00:18:29.337 }, 00:18:29.337 "memory_domains": [ 00:18:29.337 { 00:18:29.337 "dma_device_id": "system", 00:18:29.337 "dma_device_type": 1 00:18:29.337 }, 00:18:29.337 { 00:18:29.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.337 "dma_device_type": 2 00:18:29.337 } 00:18:29.337 ], 00:18:29.337 "driver_specific": {} 00:18:29.337 } 00:18:29.337 ] 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.337 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.594 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.594 "name": "Existed_Raid", 00:18:29.594 "uuid": "b57c62a2-9406-4b76-99ca-1573ffe5d734", 00:18:29.594 "strip_size_kb": 64, 00:18:29.594 "state": "configuring", 00:18:29.595 "raid_level": "raid0", 00:18:29.595 "superblock": true, 00:18:29.595 "num_base_bdevs": 4, 00:18:29.595 "num_base_bdevs_discovered": 1, 00:18:29.595 "num_base_bdevs_operational": 4, 00:18:29.595 "base_bdevs_list": [ 00:18:29.595 { 00:18:29.595 "name": "BaseBdev1", 00:18:29.595 "uuid": "013459c9-1467-4742-8c2a-06ad1096475b", 00:18:29.595 "is_configured": true, 00:18:29.595 "data_offset": 2048, 00:18:29.595 "data_size": 63488 00:18:29.595 }, 00:18:29.595 { 00:18:29.595 "name": "BaseBdev2", 00:18:29.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.595 "is_configured": false, 00:18:29.595 "data_offset": 0, 00:18:29.595 "data_size": 0 00:18:29.595 }, 00:18:29.595 { 00:18:29.595 "name": "BaseBdev3", 00:18:29.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.595 "is_configured": false, 00:18:29.595 "data_offset": 0, 00:18:29.595 "data_size": 0 00:18:29.595 }, 00:18:29.595 { 00:18:29.595 "name": "BaseBdev4", 00:18:29.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.595 "is_configured": false, 00:18:29.595 "data_offset": 0, 00:18:29.595 "data_size": 0 00:18:29.595 } 00:18:29.595 ] 00:18:29.595 }' 00:18:29.595 23:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.595 23:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.221 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:30.488 [2024-05-14 23:33:53.507872] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:30.488 [2024-05-14 23:33:53.507940] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:18:30.488 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:30.488 [2024-05-14 23:33:53.711972] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.488 [2024-05-14 23:33:53.713620] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:30.488 [2024-05-14 23:33:53.713701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:30.488 
[2024-05-14 23:33:53.713727] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:30.488 [2024-05-14 23:33:53.713757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:30.488 [2024-05-14 23:33:53.713768] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:30.488 [2024-05-14 23:33:53.713787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:30.488 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:18:30.488 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:30.488 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:30.488 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.489 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:30.489 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:30.489 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:30.489 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:30.489 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.489 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.489 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.489 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.489 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.489 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.746 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.746 "name": "Existed_Raid", 00:18:30.746 "uuid": "5d23dc32-5563-46a3-a2e4-25c5b8119382", 00:18:30.746 "strip_size_kb": 64, 00:18:30.746 "state": "configuring", 00:18:30.746 "raid_level": "raid0", 00:18:30.746 "superblock": true, 00:18:30.746 "num_base_bdevs": 4, 00:18:30.746 "num_base_bdevs_discovered": 1, 00:18:30.746 "num_base_bdevs_operational": 4, 00:18:30.746 "base_bdevs_list": [ 00:18:30.746 { 00:18:30.746 "name": "BaseBdev1", 00:18:30.746 "uuid": "013459c9-1467-4742-8c2a-06ad1096475b", 00:18:30.746 "is_configured": true, 00:18:30.746 "data_offset": 2048, 00:18:30.746 "data_size": 63488 00:18:30.746 }, 00:18:30.746 { 00:18:30.746 "name": "BaseBdev2", 00:18:30.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.746 "is_configured": false, 00:18:30.746 "data_offset": 0, 00:18:30.746 "data_size": 0 00:18:30.746 }, 00:18:30.746 { 00:18:30.746 "name": "BaseBdev3", 00:18:30.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.746 "is_configured": false, 00:18:30.746 "data_offset": 0, 00:18:30.746 "data_size": 0 00:18:30.746 }, 00:18:30.746 { 00:18:30.746 "name": "BaseBdev4", 00:18:30.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.746 "is_configured": 
false, 00:18:30.746 "data_offset": 0, 00:18:30.746 "data_size": 0 00:18:30.746 } 00:18:30.746 ] 00:18:30.746 }' 00:18:30.746 23:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.746 23:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.312 23:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:31.570 BaseBdev2 00:18:31.570 [2024-05-14 23:33:54.773331] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.570 23:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:18:31.570 23:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:31.570 23:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:31.570 23:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:31.570 23:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:31.570 23:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:31.570 23:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:31.827 23:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:32.086 [ 00:18:32.086 { 00:18:32.086 "name": "BaseBdev2", 00:18:32.086 "aliases": [ 00:18:32.086 "38e49da8-92d9-4847-95ea-cfd4c900214f" 00:18:32.086 ], 00:18:32.086 "product_name": "Malloc disk", 00:18:32.086 "block_size": 512, 00:18:32.086 "num_blocks": 65536, 00:18:32.086 "uuid": "38e49da8-92d9-4847-95ea-cfd4c900214f", 00:18:32.086 "assigned_rate_limits": { 00:18:32.086 "rw_ios_per_sec": 0, 00:18:32.086 "rw_mbytes_per_sec": 0, 00:18:32.086 "r_mbytes_per_sec": 0, 00:18:32.086 "w_mbytes_per_sec": 0 00:18:32.086 }, 00:18:32.086 "claimed": true, 00:18:32.086 "claim_type": "exclusive_write", 00:18:32.086 "zoned": false, 00:18:32.086 "supported_io_types": { 00:18:32.086 "read": true, 00:18:32.086 "write": true, 00:18:32.086 "unmap": true, 00:18:32.086 "write_zeroes": true, 00:18:32.086 "flush": true, 00:18:32.086 "reset": true, 00:18:32.086 "compare": false, 00:18:32.086 "compare_and_write": false, 00:18:32.086 "abort": true, 00:18:32.086 "nvme_admin": false, 00:18:32.086 "nvme_io": false 00:18:32.086 }, 00:18:32.086 "memory_domains": [ 00:18:32.086 { 00:18:32.086 "dma_device_id": "system", 00:18:32.086 "dma_device_type": 1 00:18:32.086 }, 00:18:32.086 { 00:18:32.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.086 "dma_device_type": 2 00:18:32.086 } 00:18:32.086 ], 00:18:32.086 "driver_specific": {} 00:18:32.086 } 00:18:32.086 ] 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.086 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.344 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.344 "name": "Existed_Raid", 00:18:32.344 "uuid": "5d23dc32-5563-46a3-a2e4-25c5b8119382", 00:18:32.344 "strip_size_kb": 64, 00:18:32.344 "state": "configuring", 00:18:32.344 "raid_level": "raid0", 00:18:32.344 "superblock": true, 00:18:32.344 "num_base_bdevs": 4, 00:18:32.344 "num_base_bdevs_discovered": 2, 00:18:32.344 "num_base_bdevs_operational": 4, 00:18:32.344 "base_bdevs_list": [ 00:18:32.344 { 00:18:32.344 "name": "BaseBdev1", 00:18:32.344 "uuid": "013459c9-1467-4742-8c2a-06ad1096475b", 00:18:32.344 "is_configured": true, 00:18:32.344 "data_offset": 2048, 00:18:32.344 "data_size": 63488 00:18:32.344 }, 00:18:32.344 { 00:18:32.344 "name": "BaseBdev2", 00:18:32.344 "uuid": "38e49da8-92d9-4847-95ea-cfd4c900214f", 00:18:32.344 "is_configured": true, 00:18:32.344 "data_offset": 2048, 00:18:32.344 "data_size": 63488 00:18:32.344 }, 00:18:32.344 { 00:18:32.344 "name": "BaseBdev3", 00:18:32.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.344 "is_configured": false, 00:18:32.344 "data_offset": 0, 00:18:32.344 "data_size": 0 00:18:32.344 }, 00:18:32.344 { 00:18:32.344 "name": "BaseBdev4", 00:18:32.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.344 "is_configured": false, 00:18:32.344 "data_offset": 0, 00:18:32.344 "data_size": 0 00:18:32.344 } 00:18:32.344 ] 00:18:32.344 }' 00:18:32.344 23:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.344 23:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.910 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:33.168 [2024-05-14 23:33:56.246676] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.168 BaseBdev3 00:18:33.168 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:18:33.168 23:33:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:33.168 23:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:33.168 23:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:33.168 23:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:33.168 23:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:33.168 23:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.168 23:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:33.427 [ 00:18:33.427 { 00:18:33.427 "name": "BaseBdev3", 00:18:33.427 "aliases": [ 00:18:33.427 "074f5b14-ad64-4d4a-82ed-0cf57dff3f9c" 00:18:33.427 ], 00:18:33.427 "product_name": "Malloc disk", 00:18:33.427 "block_size": 512, 00:18:33.427 "num_blocks": 65536, 00:18:33.427 "uuid": "074f5b14-ad64-4d4a-82ed-0cf57dff3f9c", 00:18:33.427 "assigned_rate_limits": { 00:18:33.427 "rw_ios_per_sec": 0, 00:18:33.427 "rw_mbytes_per_sec": 0, 00:18:33.427 "r_mbytes_per_sec": 0, 00:18:33.427 "w_mbytes_per_sec": 0 00:18:33.427 }, 00:18:33.427 "claimed": true, 00:18:33.427 "claim_type": "exclusive_write", 00:18:33.427 "zoned": false, 00:18:33.427 "supported_io_types": { 00:18:33.427 "read": true, 00:18:33.427 "write": true, 00:18:33.427 "unmap": true, 00:18:33.427 "write_zeroes": true, 00:18:33.427 "flush": true, 00:18:33.427 "reset": true, 00:18:33.427 "compare": false, 00:18:33.427 "compare_and_write": false, 00:18:33.427 "abort": true, 00:18:33.427 "nvme_admin": false, 00:18:33.427 "nvme_io": false 00:18:33.427 }, 00:18:33.427 "memory_domains": [ 00:18:33.427 { 00:18:33.427 "dma_device_id": "system", 00:18:33.427 "dma_device_type": 1 00:18:33.427 }, 00:18:33.427 { 00:18:33.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.427 "dma_device_type": 2 00:18:33.427 } 00:18:33.427 ], 00:18:33.427 "driver_specific": {} 00:18:33.427 } 00:18:33.427 ] 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.427 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.686 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:33.686 "name": "Existed_Raid", 00:18:33.686 "uuid": "5d23dc32-5563-46a3-a2e4-25c5b8119382", 00:18:33.686 "strip_size_kb": 64, 00:18:33.686 "state": "configuring", 00:18:33.686 "raid_level": "raid0", 00:18:33.686 "superblock": true, 00:18:33.686 "num_base_bdevs": 4, 00:18:33.686 "num_base_bdevs_discovered": 3, 00:18:33.686 "num_base_bdevs_operational": 4, 00:18:33.686 "base_bdevs_list": [ 00:18:33.686 { 00:18:33.686 "name": "BaseBdev1", 00:18:33.686 "uuid": "013459c9-1467-4742-8c2a-06ad1096475b", 00:18:33.686 "is_configured": true, 00:18:33.686 "data_offset": 2048, 00:18:33.686 "data_size": 63488 00:18:33.686 }, 00:18:33.686 { 00:18:33.686 "name": "BaseBdev2", 00:18:33.686 "uuid": "38e49da8-92d9-4847-95ea-cfd4c900214f", 00:18:33.686 "is_configured": true, 00:18:33.686 "data_offset": 2048, 00:18:33.686 "data_size": 63488 00:18:33.686 }, 00:18:33.686 { 00:18:33.686 "name": "BaseBdev3", 00:18:33.686 "uuid": "074f5b14-ad64-4d4a-82ed-0cf57dff3f9c", 00:18:33.686 "is_configured": true, 00:18:33.686 "data_offset": 2048, 00:18:33.686 "data_size": 63488 00:18:33.686 }, 00:18:33.686 { 00:18:33.686 "name": "BaseBdev4", 00:18:33.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.686 "is_configured": false, 00:18:33.686 "data_offset": 0, 00:18:33.686 "data_size": 0 00:18:33.686 } 00:18:33.686 ] 00:18:33.686 }' 00:18:33.686 23:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:33.686 23:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.641 23:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:34.642 [2024-05-14 23:33:57.876690] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:34.642 [2024-05-14 23:33:57.876891] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:18:34.642 [2024-05-14 23:33:57.876908] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:34.642 [2024-05-14 23:33:57.877022] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:34.642 BaseBdev4 00:18:34.642 [2024-05-14 23:33:57.877495] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:18:34.642 [2024-05-14 23:33:57.877515] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:18:34.642 [2024-05-14 23:33:57.877637] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.642 23:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:18:34.642 23:33:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:18:34.642 23:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:34.642 23:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:34.642 23:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:34.642 23:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:34.642 23:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:34.956 23:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:35.215 [ 00:18:35.215 { 00:18:35.215 "name": "BaseBdev4", 00:18:35.215 "aliases": [ 00:18:35.215 "8e26c8e2-2f9a-4f4b-9f17-66245ab671c3" 00:18:35.215 ], 00:18:35.215 "product_name": "Malloc disk", 00:18:35.215 "block_size": 512, 00:18:35.215 "num_blocks": 65536, 00:18:35.215 "uuid": "8e26c8e2-2f9a-4f4b-9f17-66245ab671c3", 00:18:35.215 "assigned_rate_limits": { 00:18:35.215 "rw_ios_per_sec": 0, 00:18:35.215 "rw_mbytes_per_sec": 0, 00:18:35.215 "r_mbytes_per_sec": 0, 00:18:35.215 "w_mbytes_per_sec": 0 00:18:35.215 }, 00:18:35.215 "claimed": true, 00:18:35.215 "claim_type": "exclusive_write", 00:18:35.215 "zoned": false, 00:18:35.215 "supported_io_types": { 00:18:35.215 "read": true, 00:18:35.215 "write": true, 00:18:35.215 "unmap": true, 00:18:35.215 "write_zeroes": true, 00:18:35.215 "flush": true, 00:18:35.215 "reset": true, 00:18:35.215 "compare": false, 00:18:35.215 "compare_and_write": false, 00:18:35.215 "abort": true, 00:18:35.215 "nvme_admin": false, 00:18:35.215 "nvme_io": false 00:18:35.215 }, 00:18:35.215 "memory_domains": [ 00:18:35.215 { 00:18:35.215 "dma_device_id": "system", 00:18:35.215 "dma_device_type": 1 00:18:35.215 }, 00:18:35.215 { 00:18:35.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.215 "dma_device_type": 2 00:18:35.215 } 00:18:35.215 ], 00:18:35.215 "driver_specific": {} 00:18:35.215 } 00:18:35.215 ] 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.215 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.474 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.474 "name": "Existed_Raid", 00:18:35.474 "uuid": "5d23dc32-5563-46a3-a2e4-25c5b8119382", 00:18:35.474 "strip_size_kb": 64, 00:18:35.474 "state": "online", 00:18:35.474 "raid_level": "raid0", 00:18:35.474 "superblock": true, 00:18:35.474 "num_base_bdevs": 4, 00:18:35.474 "num_base_bdevs_discovered": 4, 00:18:35.474 "num_base_bdevs_operational": 4, 00:18:35.474 "base_bdevs_list": [ 00:18:35.474 { 00:18:35.474 "name": "BaseBdev1", 00:18:35.474 "uuid": "013459c9-1467-4742-8c2a-06ad1096475b", 00:18:35.474 "is_configured": true, 00:18:35.474 "data_offset": 2048, 00:18:35.474 "data_size": 63488 00:18:35.474 }, 00:18:35.474 { 00:18:35.474 "name": "BaseBdev2", 00:18:35.474 "uuid": "38e49da8-92d9-4847-95ea-cfd4c900214f", 00:18:35.474 "is_configured": true, 00:18:35.474 "data_offset": 2048, 00:18:35.474 "data_size": 63488 00:18:35.474 }, 00:18:35.474 { 00:18:35.474 "name": "BaseBdev3", 00:18:35.474 "uuid": "074f5b14-ad64-4d4a-82ed-0cf57dff3f9c", 00:18:35.474 "is_configured": true, 00:18:35.474 "data_offset": 2048, 00:18:35.474 "data_size": 63488 00:18:35.474 }, 00:18:35.474 { 00:18:35.474 "name": "BaseBdev4", 00:18:35.474 "uuid": "8e26c8e2-2f9a-4f4b-9f17-66245ab671c3", 00:18:35.474 "is_configured": true, 00:18:35.474 "data_offset": 2048, 00:18:35.474 "data_size": 63488 00:18:35.474 } 00:18:35.474 ] 00:18:35.474 }' 00:18:35.474 23:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.474 23:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.041 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:18:36.041 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:18:36.041 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:36.041 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:36.041 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:36.041 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:18:36.041 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:36.041 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:36.300 [2024-05-14 23:33:59.489161] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.300 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:36.300 "name": "Existed_Raid", 00:18:36.300 "aliases": [ 00:18:36.300 "5d23dc32-5563-46a3-a2e4-25c5b8119382" 00:18:36.300 ], 00:18:36.300 
"product_name": "Raid Volume", 00:18:36.300 "block_size": 512, 00:18:36.300 "num_blocks": 253952, 00:18:36.300 "uuid": "5d23dc32-5563-46a3-a2e4-25c5b8119382", 00:18:36.300 "assigned_rate_limits": { 00:18:36.300 "rw_ios_per_sec": 0, 00:18:36.300 "rw_mbytes_per_sec": 0, 00:18:36.300 "r_mbytes_per_sec": 0, 00:18:36.300 "w_mbytes_per_sec": 0 00:18:36.300 }, 00:18:36.300 "claimed": false, 00:18:36.300 "zoned": false, 00:18:36.300 "supported_io_types": { 00:18:36.300 "read": true, 00:18:36.300 "write": true, 00:18:36.300 "unmap": true, 00:18:36.300 "write_zeroes": true, 00:18:36.300 "flush": true, 00:18:36.300 "reset": true, 00:18:36.300 "compare": false, 00:18:36.300 "compare_and_write": false, 00:18:36.300 "abort": false, 00:18:36.300 "nvme_admin": false, 00:18:36.300 "nvme_io": false 00:18:36.300 }, 00:18:36.300 "memory_domains": [ 00:18:36.300 { 00:18:36.300 "dma_device_id": "system", 00:18:36.300 "dma_device_type": 1 00:18:36.300 }, 00:18:36.300 { 00:18:36.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.300 "dma_device_type": 2 00:18:36.300 }, 00:18:36.300 { 00:18:36.300 "dma_device_id": "system", 00:18:36.300 "dma_device_type": 1 00:18:36.300 }, 00:18:36.300 { 00:18:36.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.300 "dma_device_type": 2 00:18:36.300 }, 00:18:36.300 { 00:18:36.300 "dma_device_id": "system", 00:18:36.300 "dma_device_type": 1 00:18:36.300 }, 00:18:36.300 { 00:18:36.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.300 "dma_device_type": 2 00:18:36.300 }, 00:18:36.300 { 00:18:36.300 "dma_device_id": "system", 00:18:36.300 "dma_device_type": 1 00:18:36.300 }, 00:18:36.300 { 00:18:36.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.300 "dma_device_type": 2 00:18:36.300 } 00:18:36.300 ], 00:18:36.300 "driver_specific": { 00:18:36.300 "raid": { 00:18:36.300 "uuid": "5d23dc32-5563-46a3-a2e4-25c5b8119382", 00:18:36.300 "strip_size_kb": 64, 00:18:36.300 "state": "online", 00:18:36.300 "raid_level": "raid0", 00:18:36.300 "superblock": true, 00:18:36.300 "num_base_bdevs": 4, 00:18:36.300 "num_base_bdevs_discovered": 4, 00:18:36.300 "num_base_bdevs_operational": 4, 00:18:36.300 "base_bdevs_list": [ 00:18:36.300 { 00:18:36.300 "name": "BaseBdev1", 00:18:36.300 "uuid": "013459c9-1467-4742-8c2a-06ad1096475b", 00:18:36.300 "is_configured": true, 00:18:36.300 "data_offset": 2048, 00:18:36.300 "data_size": 63488 00:18:36.300 }, 00:18:36.300 { 00:18:36.300 "name": "BaseBdev2", 00:18:36.300 "uuid": "38e49da8-92d9-4847-95ea-cfd4c900214f", 00:18:36.300 "is_configured": true, 00:18:36.300 "data_offset": 2048, 00:18:36.300 "data_size": 63488 00:18:36.300 }, 00:18:36.300 { 00:18:36.300 "name": "BaseBdev3", 00:18:36.300 "uuid": "074f5b14-ad64-4d4a-82ed-0cf57dff3f9c", 00:18:36.300 "is_configured": true, 00:18:36.300 "data_offset": 2048, 00:18:36.300 "data_size": 63488 00:18:36.300 }, 00:18:36.300 { 00:18:36.300 "name": "BaseBdev4", 00:18:36.300 "uuid": "8e26c8e2-2f9a-4f4b-9f17-66245ab671c3", 00:18:36.300 "is_configured": true, 00:18:36.300 "data_offset": 2048, 00:18:36.300 "data_size": 63488 00:18:36.300 } 00:18:36.300 ] 00:18:36.300 } 00:18:36.300 } 00:18:36.300 }' 00:18:36.301 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.301 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:18:36.301 BaseBdev2 00:18:36.301 BaseBdev3 00:18:36.301 BaseBdev4' 00:18:36.301 23:33:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:36.301 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:36.301 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:36.560 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:36.560 "name": "BaseBdev1", 00:18:36.560 "aliases": [ 00:18:36.560 "013459c9-1467-4742-8c2a-06ad1096475b" 00:18:36.560 ], 00:18:36.560 "product_name": "Malloc disk", 00:18:36.560 "block_size": 512, 00:18:36.560 "num_blocks": 65536, 00:18:36.560 "uuid": "013459c9-1467-4742-8c2a-06ad1096475b", 00:18:36.560 "assigned_rate_limits": { 00:18:36.560 "rw_ios_per_sec": 0, 00:18:36.560 "rw_mbytes_per_sec": 0, 00:18:36.560 "r_mbytes_per_sec": 0, 00:18:36.560 "w_mbytes_per_sec": 0 00:18:36.560 }, 00:18:36.560 "claimed": true, 00:18:36.560 "claim_type": "exclusive_write", 00:18:36.560 "zoned": false, 00:18:36.560 "supported_io_types": { 00:18:36.560 "read": true, 00:18:36.560 "write": true, 00:18:36.560 "unmap": true, 00:18:36.560 "write_zeroes": true, 00:18:36.560 "flush": true, 00:18:36.560 "reset": true, 00:18:36.560 "compare": false, 00:18:36.560 "compare_and_write": false, 00:18:36.560 "abort": true, 00:18:36.560 "nvme_admin": false, 00:18:36.560 "nvme_io": false 00:18:36.560 }, 00:18:36.560 "memory_domains": [ 00:18:36.560 { 00:18:36.560 "dma_device_id": "system", 00:18:36.560 "dma_device_type": 1 00:18:36.560 }, 00:18:36.560 { 00:18:36.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.560 "dma_device_type": 2 00:18:36.560 } 00:18:36.560 ], 00:18:36.560 "driver_specific": {} 00:18:36.560 }' 00:18:36.560 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:36.819 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:36.819 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:36.819 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:36.819 23:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:36.819 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:36.819 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:37.078 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:37.078 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.078 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:37.078 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:37.078 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:37.078 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:37.078 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:37.078 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:37.337 23:34:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:37.337 "name": "BaseBdev2", 00:18:37.337 "aliases": [ 00:18:37.337 "38e49da8-92d9-4847-95ea-cfd4c900214f" 00:18:37.337 ], 00:18:37.337 "product_name": "Malloc disk", 00:18:37.337 "block_size": 512, 00:18:37.337 "num_blocks": 65536, 00:18:37.337 "uuid": "38e49da8-92d9-4847-95ea-cfd4c900214f", 00:18:37.337 "assigned_rate_limits": { 00:18:37.337 "rw_ios_per_sec": 0, 00:18:37.337 "rw_mbytes_per_sec": 0, 00:18:37.337 "r_mbytes_per_sec": 0, 00:18:37.337 "w_mbytes_per_sec": 0 00:18:37.337 }, 00:18:37.337 "claimed": true, 00:18:37.337 "claim_type": "exclusive_write", 00:18:37.337 "zoned": false, 00:18:37.337 "supported_io_types": { 00:18:37.337 "read": true, 00:18:37.337 "write": true, 00:18:37.337 "unmap": true, 00:18:37.337 "write_zeroes": true, 00:18:37.337 "flush": true, 00:18:37.337 "reset": true, 00:18:37.337 "compare": false, 00:18:37.337 "compare_and_write": false, 00:18:37.337 "abort": true, 00:18:37.337 "nvme_admin": false, 00:18:37.337 "nvme_io": false 00:18:37.337 }, 00:18:37.337 "memory_domains": [ 00:18:37.337 { 00:18:37.337 "dma_device_id": "system", 00:18:37.337 "dma_device_type": 1 00:18:37.337 }, 00:18:37.337 { 00:18:37.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.337 "dma_device_type": 2 00:18:37.337 } 00:18:37.337 ], 00:18:37.337 "driver_specific": {} 00:18:37.337 }' 00:18:37.337 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:37.337 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:37.337 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:37.337 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:37.596 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:37.596 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.596 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:37.596 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:37.596 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.596 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:37.596 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:37.855 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:37.855 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:37.855 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:37.856 23:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:38.115 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:38.115 "name": "BaseBdev3", 00:18:38.115 "aliases": [ 00:18:38.115 "074f5b14-ad64-4d4a-82ed-0cf57dff3f9c" 00:18:38.115 ], 00:18:38.115 "product_name": "Malloc disk", 00:18:38.115 "block_size": 512, 00:18:38.115 "num_blocks": 65536, 00:18:38.115 "uuid": "074f5b14-ad64-4d4a-82ed-0cf57dff3f9c", 00:18:38.115 "assigned_rate_limits": { 00:18:38.115 "rw_ios_per_sec": 0, 00:18:38.115 "rw_mbytes_per_sec": 0, 
00:18:38.115 "r_mbytes_per_sec": 0, 00:18:38.115 "w_mbytes_per_sec": 0 00:18:38.115 }, 00:18:38.115 "claimed": true, 00:18:38.115 "claim_type": "exclusive_write", 00:18:38.115 "zoned": false, 00:18:38.115 "supported_io_types": { 00:18:38.115 "read": true, 00:18:38.115 "write": true, 00:18:38.115 "unmap": true, 00:18:38.115 "write_zeroes": true, 00:18:38.115 "flush": true, 00:18:38.115 "reset": true, 00:18:38.115 "compare": false, 00:18:38.115 "compare_and_write": false, 00:18:38.115 "abort": true, 00:18:38.115 "nvme_admin": false, 00:18:38.115 "nvme_io": false 00:18:38.115 }, 00:18:38.115 "memory_domains": [ 00:18:38.115 { 00:18:38.115 "dma_device_id": "system", 00:18:38.115 "dma_device_type": 1 00:18:38.115 }, 00:18:38.115 { 00:18:38.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.115 "dma_device_type": 2 00:18:38.115 } 00:18:38.115 ], 00:18:38.115 "driver_specific": {} 00:18:38.115 }' 00:18:38.115 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:38.115 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:38.115 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:38.115 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:38.115 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:38.115 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:38.115 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:38.374 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:38.374 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.374 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:38.374 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:38.374 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:38.374 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:38.374 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:38.374 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:38.633 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:38.633 "name": "BaseBdev4", 00:18:38.633 "aliases": [ 00:18:38.633 "8e26c8e2-2f9a-4f4b-9f17-66245ab671c3" 00:18:38.633 ], 00:18:38.633 "product_name": "Malloc disk", 00:18:38.633 "block_size": 512, 00:18:38.633 "num_blocks": 65536, 00:18:38.633 "uuid": "8e26c8e2-2f9a-4f4b-9f17-66245ab671c3", 00:18:38.633 "assigned_rate_limits": { 00:18:38.633 "rw_ios_per_sec": 0, 00:18:38.633 "rw_mbytes_per_sec": 0, 00:18:38.633 "r_mbytes_per_sec": 0, 00:18:38.633 "w_mbytes_per_sec": 0 00:18:38.633 }, 00:18:38.633 "claimed": true, 00:18:38.633 "claim_type": "exclusive_write", 00:18:38.633 "zoned": false, 00:18:38.633 "supported_io_types": { 00:18:38.633 "read": true, 00:18:38.633 "write": true, 00:18:38.633 "unmap": true, 00:18:38.633 "write_zeroes": true, 00:18:38.633 "flush": true, 00:18:38.633 "reset": true, 00:18:38.633 "compare": false, 00:18:38.633 
"compare_and_write": false, 00:18:38.633 "abort": true, 00:18:38.633 "nvme_admin": false, 00:18:38.633 "nvme_io": false 00:18:38.633 }, 00:18:38.633 "memory_domains": [ 00:18:38.633 { 00:18:38.633 "dma_device_id": "system", 00:18:38.633 "dma_device_type": 1 00:18:38.633 }, 00:18:38.633 { 00:18:38.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.633 "dma_device_type": 2 00:18:38.633 } 00:18:38.633 ], 00:18:38.633 "driver_specific": {} 00:18:38.633 }' 00:18:38.633 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:38.633 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:38.948 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:38.948 23:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:38.948 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:38.948 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:38.948 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:38.948 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:38.948 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.948 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:38.948 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:39.207 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:39.207 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:39.207 [2024-05-14 23:34:02.441413] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:39.207 [2024-05-14 23:34:02.441488] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.207 [2024-05-14 23:34:02.441537] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.466 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.725 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:39.725 "name": "Existed_Raid", 00:18:39.725 "uuid": "5d23dc32-5563-46a3-a2e4-25c5b8119382", 00:18:39.725 "strip_size_kb": 64, 00:18:39.725 "state": "offline", 00:18:39.725 "raid_level": "raid0", 00:18:39.725 "superblock": true, 00:18:39.725 "num_base_bdevs": 4, 00:18:39.725 "num_base_bdevs_discovered": 3, 00:18:39.725 "num_base_bdevs_operational": 3, 00:18:39.725 "base_bdevs_list": [ 00:18:39.725 { 00:18:39.725 "name": null, 00:18:39.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.725 "is_configured": false, 00:18:39.725 "data_offset": 2048, 00:18:39.725 "data_size": 63488 00:18:39.725 }, 00:18:39.725 { 00:18:39.725 "name": "BaseBdev2", 00:18:39.725 "uuid": "38e49da8-92d9-4847-95ea-cfd4c900214f", 00:18:39.725 "is_configured": true, 00:18:39.725 "data_offset": 2048, 00:18:39.725 "data_size": 63488 00:18:39.725 }, 00:18:39.725 { 00:18:39.725 "name": "BaseBdev3", 00:18:39.725 "uuid": "074f5b14-ad64-4d4a-82ed-0cf57dff3f9c", 00:18:39.725 "is_configured": true, 00:18:39.725 "data_offset": 2048, 00:18:39.725 "data_size": 63488 00:18:39.725 }, 00:18:39.725 { 00:18:39.725 "name": "BaseBdev4", 00:18:39.725 "uuid": "8e26c8e2-2f9a-4f4b-9f17-66245ab671c3", 00:18:39.725 "is_configured": true, 00:18:39.725 "data_offset": 2048, 00:18:39.725 "data_size": 63488 00:18:39.725 } 00:18:39.725 ] 00:18:39.725 }' 00:18:39.725 23:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:39.725 23:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.292 23:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:40.292 23:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:40.292 23:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.292 23:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:40.550 23:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:40.550 23:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:40.550 23:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:40.550 [2024-05-14 23:34:03.819041] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:40.808 23:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
(( i++ )) 00:18:40.808 23:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:40.808 23:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:40.808 23:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.067 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:41.067 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:41.067 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:41.326 [2024-05-14 23:34:04.379958] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:41.326 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:41.326 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:41.326 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:18:41.326 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.585 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:18:41.585 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:41.585 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:41.843 [2024-05-14 23:34:04.887882] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:41.843 [2024-05-14 23:34:04.887966] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:18:41.843 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:41.843 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:41.843 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.843 23:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:18:42.102 23:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:18:42.102 23:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:18:42.102 23:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:18:42.102 23:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:18:42.102 23:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:42.102 23:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:42.360 BaseBdev2 00:18:42.361 23:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 
00:18:42.361 23:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:42.361 23:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:42.361 23:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:42.361 23:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:42.361 23:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:42.361 23:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.361 23:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:42.620 [ 00:18:42.620 { 00:18:42.620 "name": "BaseBdev2", 00:18:42.620 "aliases": [ 00:18:42.620 "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6" 00:18:42.620 ], 00:18:42.620 "product_name": "Malloc disk", 00:18:42.620 "block_size": 512, 00:18:42.620 "num_blocks": 65536, 00:18:42.620 "uuid": "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6", 00:18:42.620 "assigned_rate_limits": { 00:18:42.620 "rw_ios_per_sec": 0, 00:18:42.620 "rw_mbytes_per_sec": 0, 00:18:42.620 "r_mbytes_per_sec": 0, 00:18:42.620 "w_mbytes_per_sec": 0 00:18:42.620 }, 00:18:42.620 "claimed": false, 00:18:42.620 "zoned": false, 00:18:42.620 "supported_io_types": { 00:18:42.620 "read": true, 00:18:42.620 "write": true, 00:18:42.620 "unmap": true, 00:18:42.620 "write_zeroes": true, 00:18:42.620 "flush": true, 00:18:42.620 "reset": true, 00:18:42.620 "compare": false, 00:18:42.620 "compare_and_write": false, 00:18:42.620 "abort": true, 00:18:42.620 "nvme_admin": false, 00:18:42.620 "nvme_io": false 00:18:42.620 }, 00:18:42.620 "memory_domains": [ 00:18:42.620 { 00:18:42.620 "dma_device_id": "system", 00:18:42.620 "dma_device_type": 1 00:18:42.620 }, 00:18:42.620 { 00:18:42.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.620 "dma_device_type": 2 00:18:42.620 } 00:18:42.620 ], 00:18:42.620 "driver_specific": {} 00:18:42.620 } 00:18:42.620 ] 00:18:42.620 23:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:42.620 23:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:42.620 23:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:42.620 23:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:42.878 BaseBdev3 00:18:42.878 23:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:18:42.878 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:42.878 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:42.878 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:42.878 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:42.878 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:42.878 23:34:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:43.137 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:43.396 [ 00:18:43.396 { 00:18:43.396 "name": "BaseBdev3", 00:18:43.396 "aliases": [ 00:18:43.396 "285f1bdc-1eae-4295-9702-2b4ba305d0a2" 00:18:43.396 ], 00:18:43.396 "product_name": "Malloc disk", 00:18:43.396 "block_size": 512, 00:18:43.396 "num_blocks": 65536, 00:18:43.396 "uuid": "285f1bdc-1eae-4295-9702-2b4ba305d0a2", 00:18:43.396 "assigned_rate_limits": { 00:18:43.396 "rw_ios_per_sec": 0, 00:18:43.396 "rw_mbytes_per_sec": 0, 00:18:43.396 "r_mbytes_per_sec": 0, 00:18:43.396 "w_mbytes_per_sec": 0 00:18:43.396 }, 00:18:43.396 "claimed": false, 00:18:43.396 "zoned": false, 00:18:43.396 "supported_io_types": { 00:18:43.396 "read": true, 00:18:43.396 "write": true, 00:18:43.396 "unmap": true, 00:18:43.396 "write_zeroes": true, 00:18:43.396 "flush": true, 00:18:43.396 "reset": true, 00:18:43.396 "compare": false, 00:18:43.396 "compare_and_write": false, 00:18:43.396 "abort": true, 00:18:43.396 "nvme_admin": false, 00:18:43.396 "nvme_io": false 00:18:43.396 }, 00:18:43.396 "memory_domains": [ 00:18:43.396 { 00:18:43.396 "dma_device_id": "system", 00:18:43.396 "dma_device_type": 1 00:18:43.396 }, 00:18:43.396 { 00:18:43.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.396 "dma_device_type": 2 00:18:43.396 } 00:18:43.396 ], 00:18:43.396 "driver_specific": {} 00:18:43.396 } 00:18:43.396 ] 00:18:43.396 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:43.396 23:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:43.396 23:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:43.396 23:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:43.655 BaseBdev4 00:18:43.655 23:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:18:43.655 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:18:43.655 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:43.655 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:43.655 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:43.655 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:43.655 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:43.914 23:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:43.914 [ 00:18:43.914 { 00:18:43.914 "name": "BaseBdev4", 00:18:43.914 "aliases": [ 00:18:43.914 "7cda21c8-f658-42fb-93e0-8ec43e8dfa97" 00:18:43.914 ], 00:18:43.914 "product_name": "Malloc disk", 00:18:43.914 "block_size": 512, 
00:18:43.914 "num_blocks": 65536, 00:18:43.914 "uuid": "7cda21c8-f658-42fb-93e0-8ec43e8dfa97", 00:18:43.914 "assigned_rate_limits": { 00:18:43.914 "rw_ios_per_sec": 0, 00:18:43.914 "rw_mbytes_per_sec": 0, 00:18:43.914 "r_mbytes_per_sec": 0, 00:18:43.914 "w_mbytes_per_sec": 0 00:18:43.914 }, 00:18:43.914 "claimed": false, 00:18:43.914 "zoned": false, 00:18:43.914 "supported_io_types": { 00:18:43.914 "read": true, 00:18:43.914 "write": true, 00:18:43.914 "unmap": true, 00:18:43.914 "write_zeroes": true, 00:18:43.914 "flush": true, 00:18:43.914 "reset": true, 00:18:43.914 "compare": false, 00:18:43.914 "compare_and_write": false, 00:18:43.914 "abort": true, 00:18:43.914 "nvme_admin": false, 00:18:43.914 "nvme_io": false 00:18:43.914 }, 00:18:43.914 "memory_domains": [ 00:18:43.914 { 00:18:43.914 "dma_device_id": "system", 00:18:43.914 "dma_device_type": 1 00:18:43.914 }, 00:18:43.914 { 00:18:43.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.914 "dma_device_type": 2 00:18:43.914 } 00:18:43.914 ], 00:18:43.914 "driver_specific": {} 00:18:43.914 } 00:18:43.914 ] 00:18:43.914 23:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:43.914 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:18:43.914 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:18:43.914 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:44.173 [2024-05-14 23:34:07.357509] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:44.173 [2024-05-14 23:34:07.357609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:44.173 [2024-05-14 23:34:07.357650] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:44.173 [2024-05-14 23:34:07.359579] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:44.173 [2024-05-14 23:34:07.359633] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.173 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.481 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:44.481 "name": "Existed_Raid", 00:18:44.481 "uuid": "125e3b38-5175-4d44-b615-ba80dedb0d74", 00:18:44.481 "strip_size_kb": 64, 00:18:44.481 "state": "configuring", 00:18:44.481 "raid_level": "raid0", 00:18:44.481 "superblock": true, 00:18:44.481 "num_base_bdevs": 4, 00:18:44.481 "num_base_bdevs_discovered": 3, 00:18:44.481 "num_base_bdevs_operational": 4, 00:18:44.481 "base_bdevs_list": [ 00:18:44.481 { 00:18:44.481 "name": "BaseBdev1", 00:18:44.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.481 "is_configured": false, 00:18:44.481 "data_offset": 0, 00:18:44.481 "data_size": 0 00:18:44.481 }, 00:18:44.481 { 00:18:44.481 "name": "BaseBdev2", 00:18:44.481 "uuid": "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6", 00:18:44.481 "is_configured": true, 00:18:44.481 "data_offset": 2048, 00:18:44.481 "data_size": 63488 00:18:44.481 }, 00:18:44.481 { 00:18:44.481 "name": "BaseBdev3", 00:18:44.481 "uuid": "285f1bdc-1eae-4295-9702-2b4ba305d0a2", 00:18:44.481 "is_configured": true, 00:18:44.481 "data_offset": 2048, 00:18:44.481 "data_size": 63488 00:18:44.481 }, 00:18:44.481 { 00:18:44.481 "name": "BaseBdev4", 00:18:44.481 "uuid": "7cda21c8-f658-42fb-93e0-8ec43e8dfa97", 00:18:44.481 "is_configured": true, 00:18:44.481 "data_offset": 2048, 00:18:44.481 "data_size": 63488 00:18:44.481 } 00:18:44.481 ] 00:18:44.481 }' 00:18:44.481 23:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:44.481 23:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.048 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:45.306 [2024-05-14 23:34:08.449561] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.306 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.565 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.565 "name": "Existed_Raid", 00:18:45.565 "uuid": "125e3b38-5175-4d44-b615-ba80dedb0d74", 00:18:45.565 "strip_size_kb": 64, 00:18:45.565 "state": "configuring", 00:18:45.565 "raid_level": "raid0", 00:18:45.565 "superblock": true, 00:18:45.565 "num_base_bdevs": 4, 00:18:45.565 "num_base_bdevs_discovered": 2, 00:18:45.565 "num_base_bdevs_operational": 4, 00:18:45.565 "base_bdevs_list": [ 00:18:45.565 { 00:18:45.565 "name": "BaseBdev1", 00:18:45.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.565 "is_configured": false, 00:18:45.565 "data_offset": 0, 00:18:45.565 "data_size": 0 00:18:45.565 }, 00:18:45.565 { 00:18:45.565 "name": null, 00:18:45.565 "uuid": "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6", 00:18:45.565 "is_configured": false, 00:18:45.565 "data_offset": 2048, 00:18:45.565 "data_size": 63488 00:18:45.565 }, 00:18:45.565 { 00:18:45.565 "name": "BaseBdev3", 00:18:45.565 "uuid": "285f1bdc-1eae-4295-9702-2b4ba305d0a2", 00:18:45.565 "is_configured": true, 00:18:45.565 "data_offset": 2048, 00:18:45.565 "data_size": 63488 00:18:45.565 }, 00:18:45.565 { 00:18:45.565 "name": "BaseBdev4", 00:18:45.565 "uuid": "7cda21c8-f658-42fb-93e0-8ec43e8dfa97", 00:18:45.565 "is_configured": true, 00:18:45.565 "data_offset": 2048, 00:18:45.565 "data_size": 63488 00:18:45.565 } 00:18:45.565 ] 00:18:45.565 }' 00:18:45.565 23:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.565 23:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.132 23:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.132 23:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:46.390 23:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:18:46.390 23:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:46.648 BaseBdev1 00:18:46.648 [2024-05-14 23:34:09.770419] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:46.648 23:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:18:46.648 23:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:46.648 23:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:46.648 23:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:46.648 23:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:46.648 23:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:46.648 23:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:46.906 23:34:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:46.906 [ 00:18:46.906 { 00:18:46.906 "name": "BaseBdev1", 00:18:46.906 "aliases": [ 00:18:46.906 "f735d6c7-422c-4b99-82f8-fb877604a49b" 00:18:46.906 ], 00:18:46.906 "product_name": "Malloc disk", 00:18:46.906 "block_size": 512, 00:18:46.906 "num_blocks": 65536, 00:18:46.906 "uuid": "f735d6c7-422c-4b99-82f8-fb877604a49b", 00:18:46.906 "assigned_rate_limits": { 00:18:46.906 "rw_ios_per_sec": 0, 00:18:46.906 "rw_mbytes_per_sec": 0, 00:18:46.906 "r_mbytes_per_sec": 0, 00:18:46.906 "w_mbytes_per_sec": 0 00:18:46.906 }, 00:18:46.906 "claimed": true, 00:18:46.906 "claim_type": "exclusive_write", 00:18:46.906 "zoned": false, 00:18:46.906 "supported_io_types": { 00:18:46.906 "read": true, 00:18:46.906 "write": true, 00:18:46.906 "unmap": true, 00:18:46.906 "write_zeroes": true, 00:18:46.906 "flush": true, 00:18:46.906 "reset": true, 00:18:46.906 "compare": false, 00:18:46.906 "compare_and_write": false, 00:18:46.906 "abort": true, 00:18:46.906 "nvme_admin": false, 00:18:46.906 "nvme_io": false 00:18:46.906 }, 00:18:46.906 "memory_domains": [ 00:18:46.906 { 00:18:46.906 "dma_device_id": "system", 00:18:46.906 "dma_device_type": 1 00:18:46.906 }, 00:18:46.906 { 00:18:46.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.906 "dma_device_type": 2 00:18:46.906 } 00:18:46.906 ], 00:18:46.906 "driver_specific": {} 00:18:46.906 } 00:18:46.906 ] 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.906 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.165 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.165 "name": "Existed_Raid", 00:18:47.165 "uuid": "125e3b38-5175-4d44-b615-ba80dedb0d74", 00:18:47.165 "strip_size_kb": 64, 00:18:47.165 "state": "configuring", 00:18:47.165 "raid_level": "raid0", 00:18:47.165 "superblock": true, 00:18:47.165 "num_base_bdevs": 4, 00:18:47.165 "num_base_bdevs_discovered": 3, 
00:18:47.165 "num_base_bdevs_operational": 4, 00:18:47.165 "base_bdevs_list": [ 00:18:47.165 { 00:18:47.165 "name": "BaseBdev1", 00:18:47.165 "uuid": "f735d6c7-422c-4b99-82f8-fb877604a49b", 00:18:47.165 "is_configured": true, 00:18:47.165 "data_offset": 2048, 00:18:47.165 "data_size": 63488 00:18:47.165 }, 00:18:47.165 { 00:18:47.165 "name": null, 00:18:47.165 "uuid": "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6", 00:18:47.165 "is_configured": false, 00:18:47.165 "data_offset": 2048, 00:18:47.165 "data_size": 63488 00:18:47.165 }, 00:18:47.165 { 00:18:47.165 "name": "BaseBdev3", 00:18:47.165 "uuid": "285f1bdc-1eae-4295-9702-2b4ba305d0a2", 00:18:47.165 "is_configured": true, 00:18:47.165 "data_offset": 2048, 00:18:47.165 "data_size": 63488 00:18:47.165 }, 00:18:47.165 { 00:18:47.165 "name": "BaseBdev4", 00:18:47.165 "uuid": "7cda21c8-f658-42fb-93e0-8ec43e8dfa97", 00:18:47.165 "is_configured": true, 00:18:47.165 "data_offset": 2048, 00:18:47.165 "data_size": 63488 00:18:47.165 } 00:18:47.165 ] 00:18:47.165 }' 00:18:47.165 23:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.165 23:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.101 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.101 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:48.101 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:48.101 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:48.359 [2024-05-14 23:34:11.566779] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.359 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.618 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:18:48.618 "name": "Existed_Raid", 00:18:48.618 "uuid": "125e3b38-5175-4d44-b615-ba80dedb0d74", 00:18:48.618 "strip_size_kb": 64, 00:18:48.618 "state": "configuring", 00:18:48.618 "raid_level": "raid0", 00:18:48.618 "superblock": true, 00:18:48.618 "num_base_bdevs": 4, 00:18:48.618 "num_base_bdevs_discovered": 2, 00:18:48.618 "num_base_bdevs_operational": 4, 00:18:48.618 "base_bdevs_list": [ 00:18:48.618 { 00:18:48.618 "name": "BaseBdev1", 00:18:48.618 "uuid": "f735d6c7-422c-4b99-82f8-fb877604a49b", 00:18:48.618 "is_configured": true, 00:18:48.618 "data_offset": 2048, 00:18:48.618 "data_size": 63488 00:18:48.618 }, 00:18:48.618 { 00:18:48.618 "name": null, 00:18:48.618 "uuid": "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6", 00:18:48.618 "is_configured": false, 00:18:48.618 "data_offset": 2048, 00:18:48.618 "data_size": 63488 00:18:48.618 }, 00:18:48.618 { 00:18:48.618 "name": null, 00:18:48.618 "uuid": "285f1bdc-1eae-4295-9702-2b4ba305d0a2", 00:18:48.618 "is_configured": false, 00:18:48.618 "data_offset": 2048, 00:18:48.618 "data_size": 63488 00:18:48.618 }, 00:18:48.618 { 00:18:48.618 "name": "BaseBdev4", 00:18:48.618 "uuid": "7cda21c8-f658-42fb-93e0-8ec43e8dfa97", 00:18:48.618 "is_configured": true, 00:18:48.618 "data_offset": 2048, 00:18:48.618 "data_size": 63488 00:18:48.618 } 00:18:48.618 ] 00:18:48.618 }' 00:18:48.618 23:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.618 23:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.554 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:49.554 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.554 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:18:49.554 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:49.813 [2024-05-14 23:34:12.903144] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:49.813 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:49.813 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:49.813 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:49.813 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:49.813 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:49.813 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:49.813 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.813 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:49.813 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.813 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.813 23:34:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.813 23:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.071 23:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.071 "name": "Existed_Raid", 00:18:50.071 "uuid": "125e3b38-5175-4d44-b615-ba80dedb0d74", 00:18:50.071 "strip_size_kb": 64, 00:18:50.071 "state": "configuring", 00:18:50.071 "raid_level": "raid0", 00:18:50.071 "superblock": true, 00:18:50.071 "num_base_bdevs": 4, 00:18:50.071 "num_base_bdevs_discovered": 3, 00:18:50.071 "num_base_bdevs_operational": 4, 00:18:50.071 "base_bdevs_list": [ 00:18:50.071 { 00:18:50.071 "name": "BaseBdev1", 00:18:50.071 "uuid": "f735d6c7-422c-4b99-82f8-fb877604a49b", 00:18:50.071 "is_configured": true, 00:18:50.071 "data_offset": 2048, 00:18:50.071 "data_size": 63488 00:18:50.071 }, 00:18:50.071 { 00:18:50.071 "name": null, 00:18:50.071 "uuid": "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6", 00:18:50.071 "is_configured": false, 00:18:50.071 "data_offset": 2048, 00:18:50.071 "data_size": 63488 00:18:50.071 }, 00:18:50.071 { 00:18:50.071 "name": "BaseBdev3", 00:18:50.071 "uuid": "285f1bdc-1eae-4295-9702-2b4ba305d0a2", 00:18:50.071 "is_configured": true, 00:18:50.071 "data_offset": 2048, 00:18:50.071 "data_size": 63488 00:18:50.071 }, 00:18:50.071 { 00:18:50.071 "name": "BaseBdev4", 00:18:50.071 "uuid": "7cda21c8-f658-42fb-93e0-8ec43e8dfa97", 00:18:50.071 "is_configured": true, 00:18:50.071 "data_offset": 2048, 00:18:50.071 "data_size": 63488 00:18:50.071 } 00:18:50.071 ] 00:18:50.071 }' 00:18:50.071 23:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.071 23:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.636 23:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.636 23:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:50.894 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:18:50.894 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:51.151 [2024-05-14 23:34:14.259450] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:51.151 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:51.151 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:51.151 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:51.151 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:51.151 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:51.151 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:51.151 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.151 23:34:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.151 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.151 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.151 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.151 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.409 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.409 "name": "Existed_Raid", 00:18:51.409 "uuid": "125e3b38-5175-4d44-b615-ba80dedb0d74", 00:18:51.409 "strip_size_kb": 64, 00:18:51.409 "state": "configuring", 00:18:51.409 "raid_level": "raid0", 00:18:51.409 "superblock": true, 00:18:51.409 "num_base_bdevs": 4, 00:18:51.409 "num_base_bdevs_discovered": 2, 00:18:51.409 "num_base_bdevs_operational": 4, 00:18:51.409 "base_bdevs_list": [ 00:18:51.409 { 00:18:51.409 "name": null, 00:18:51.409 "uuid": "f735d6c7-422c-4b99-82f8-fb877604a49b", 00:18:51.409 "is_configured": false, 00:18:51.409 "data_offset": 2048, 00:18:51.409 "data_size": 63488 00:18:51.409 }, 00:18:51.409 { 00:18:51.409 "name": null, 00:18:51.409 "uuid": "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6", 00:18:51.409 "is_configured": false, 00:18:51.409 "data_offset": 2048, 00:18:51.409 "data_size": 63488 00:18:51.409 }, 00:18:51.409 { 00:18:51.409 "name": "BaseBdev3", 00:18:51.409 "uuid": "285f1bdc-1eae-4295-9702-2b4ba305d0a2", 00:18:51.409 "is_configured": true, 00:18:51.409 "data_offset": 2048, 00:18:51.409 "data_size": 63488 00:18:51.409 }, 00:18:51.409 { 00:18:51.409 "name": "BaseBdev4", 00:18:51.409 "uuid": "7cda21c8-f658-42fb-93e0-8ec43e8dfa97", 00:18:51.409 "is_configured": true, 00:18:51.409 "data_offset": 2048, 00:18:51.409 "data_size": 63488 00:18:51.409 } 00:18:51.409 ] 00:18:51.409 }' 00:18:51.409 23:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.409 23:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.975 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.975 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:52.233 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:18:52.233 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:52.491 [2024-05-14 23:34:15.667761] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:52.491 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:52.491 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:52.491 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:52.491 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
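Each verify_raid_bdev_state call in this trace reduces to the same two RPC queries; as a rough manual reproduction (assuming the test app from this run is still listening on /var/tmp/spdk-raid.sock — the paths and jq filters are taken from the trace itself, the variable names are illustrative), the state and per-slot checks look like:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock
    # Dump every raid bdev and keep only the one under test
    $RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # Check whether a particular base bdev slot has been (re)configured yet
    $RPC -s $SOCK bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'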
00:18:52.491 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:52.491 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:52.491 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:52.491 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:52.491 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:52.491 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:52.491 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.492 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.750 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.750 "name": "Existed_Raid", 00:18:52.750 "uuid": "125e3b38-5175-4d44-b615-ba80dedb0d74", 00:18:52.750 "strip_size_kb": 64, 00:18:52.750 "state": "configuring", 00:18:52.750 "raid_level": "raid0", 00:18:52.750 "superblock": true, 00:18:52.750 "num_base_bdevs": 4, 00:18:52.750 "num_base_bdevs_discovered": 3, 00:18:52.750 "num_base_bdevs_operational": 4, 00:18:52.750 "base_bdevs_list": [ 00:18:52.750 { 00:18:52.750 "name": null, 00:18:52.750 "uuid": "f735d6c7-422c-4b99-82f8-fb877604a49b", 00:18:52.750 "is_configured": false, 00:18:52.750 "data_offset": 2048, 00:18:52.750 "data_size": 63488 00:18:52.750 }, 00:18:52.750 { 00:18:52.750 "name": "BaseBdev2", 00:18:52.750 "uuid": "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6", 00:18:52.750 "is_configured": true, 00:18:52.750 "data_offset": 2048, 00:18:52.750 "data_size": 63488 00:18:52.750 }, 00:18:52.750 { 00:18:52.750 "name": "BaseBdev3", 00:18:52.750 "uuid": "285f1bdc-1eae-4295-9702-2b4ba305d0a2", 00:18:52.750 "is_configured": true, 00:18:52.750 "data_offset": 2048, 00:18:52.750 "data_size": 63488 00:18:52.750 }, 00:18:52.750 { 00:18:52.750 "name": "BaseBdev4", 00:18:52.750 "uuid": "7cda21c8-f658-42fb-93e0-8ec43e8dfa97", 00:18:52.750 "is_configured": true, 00:18:52.750 "data_offset": 2048, 00:18:52.750 "data_size": 63488 00:18:52.750 } 00:18:52.750 ] 00:18:52.750 }' 00:18:52.750 23:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.750 23:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.315 23:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:53.315 23:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.573 23:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:18:53.573 23:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:53.573 23:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.831 23:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b NewBaseBdev -u f735d6c7-422c-4b99-82f8-fb877604a49b 00:18:54.088 NewBaseBdev 00:18:54.088 [2024-05-14 23:34:17.162200] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:54.089 [2024-05-14 23:34:17.162380] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:18:54.089 [2024-05-14 23:34:17.162395] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:54.089 [2024-05-14 23:34:17.162478] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:54.089 [2024-05-14 23:34:17.162694] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:18:54.089 [2024-05-14 23:34:17.162738] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:18:54.089 [2024-05-14 23:34:17.162840] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.089 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:18:54.089 23:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:18:54.089 23:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:54.089 23:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:54.089 23:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:54.089 23:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:54.089 23:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:54.089 23:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:54.347 [ 00:18:54.347 { 00:18:54.347 "name": "NewBaseBdev", 00:18:54.347 "aliases": [ 00:18:54.347 "f735d6c7-422c-4b99-82f8-fb877604a49b" 00:18:54.347 ], 00:18:54.347 "product_name": "Malloc disk", 00:18:54.347 "block_size": 512, 00:18:54.347 "num_blocks": 65536, 00:18:54.347 "uuid": "f735d6c7-422c-4b99-82f8-fb877604a49b", 00:18:54.347 "assigned_rate_limits": { 00:18:54.347 "rw_ios_per_sec": 0, 00:18:54.347 "rw_mbytes_per_sec": 0, 00:18:54.347 "r_mbytes_per_sec": 0, 00:18:54.347 "w_mbytes_per_sec": 0 00:18:54.347 }, 00:18:54.347 "claimed": true, 00:18:54.347 "claim_type": "exclusive_write", 00:18:54.347 "zoned": false, 00:18:54.347 "supported_io_types": { 00:18:54.347 "read": true, 00:18:54.347 "write": true, 00:18:54.347 "unmap": true, 00:18:54.347 "write_zeroes": true, 00:18:54.347 "flush": true, 00:18:54.347 "reset": true, 00:18:54.347 "compare": false, 00:18:54.347 "compare_and_write": false, 00:18:54.347 "abort": true, 00:18:54.347 "nvme_admin": false, 00:18:54.347 "nvme_io": false 00:18:54.347 }, 00:18:54.347 "memory_domains": [ 00:18:54.347 { 00:18:54.347 "dma_device_id": "system", 00:18:54.347 "dma_device_type": 1 00:18:54.347 }, 00:18:54.347 { 00:18:54.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.347 "dma_device_type": 2 00:18:54.347 } 00:18:54.347 ], 00:18:54.347 "driver_specific": {} 00:18:54.347 } 00:18:54.347 ] 00:18:54.347 23:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 
-- # return 0 00:18:54.347 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:54.347 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:54.347 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:54.348 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:54.348 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:54.348 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:54.348 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:54.348 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:54.348 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:54.348 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:54.348 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.348 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.606 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.606 "name": "Existed_Raid", 00:18:54.606 "uuid": "125e3b38-5175-4d44-b615-ba80dedb0d74", 00:18:54.606 "strip_size_kb": 64, 00:18:54.606 "state": "online", 00:18:54.606 "raid_level": "raid0", 00:18:54.606 "superblock": true, 00:18:54.606 "num_base_bdevs": 4, 00:18:54.606 "num_base_bdevs_discovered": 4, 00:18:54.606 "num_base_bdevs_operational": 4, 00:18:54.606 "base_bdevs_list": [ 00:18:54.606 { 00:18:54.606 "name": "NewBaseBdev", 00:18:54.606 "uuid": "f735d6c7-422c-4b99-82f8-fb877604a49b", 00:18:54.606 "is_configured": true, 00:18:54.606 "data_offset": 2048, 00:18:54.606 "data_size": 63488 00:18:54.606 }, 00:18:54.606 { 00:18:54.606 "name": "BaseBdev2", 00:18:54.606 "uuid": "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6", 00:18:54.606 "is_configured": true, 00:18:54.606 "data_offset": 2048, 00:18:54.606 "data_size": 63488 00:18:54.606 }, 00:18:54.606 { 00:18:54.606 "name": "BaseBdev3", 00:18:54.606 "uuid": "285f1bdc-1eae-4295-9702-2b4ba305d0a2", 00:18:54.606 "is_configured": true, 00:18:54.606 "data_offset": 2048, 00:18:54.606 "data_size": 63488 00:18:54.606 }, 00:18:54.606 { 00:18:54.606 "name": "BaseBdev4", 00:18:54.606 "uuid": "7cda21c8-f658-42fb-93e0-8ec43e8dfa97", 00:18:54.606 "is_configured": true, 00:18:54.606 "data_offset": 2048, 00:18:54.606 "data_size": 63488 00:18:54.606 } 00:18:54.606 ] 00:18:54.606 }' 00:18:54.606 23:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.606 23:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.541 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:18:55.541 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:18:55.541 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:18:55.542 23:34:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:18:55.542 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:18:55.542 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:18:55.542 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:55.542 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:18:55.542 [2024-05-14 23:34:18.659846] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.542 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:18:55.542 "name": "Existed_Raid", 00:18:55.542 "aliases": [ 00:18:55.542 "125e3b38-5175-4d44-b615-ba80dedb0d74" 00:18:55.542 ], 00:18:55.542 "product_name": "Raid Volume", 00:18:55.542 "block_size": 512, 00:18:55.542 "num_blocks": 253952, 00:18:55.542 "uuid": "125e3b38-5175-4d44-b615-ba80dedb0d74", 00:18:55.542 "assigned_rate_limits": { 00:18:55.542 "rw_ios_per_sec": 0, 00:18:55.542 "rw_mbytes_per_sec": 0, 00:18:55.542 "r_mbytes_per_sec": 0, 00:18:55.542 "w_mbytes_per_sec": 0 00:18:55.542 }, 00:18:55.542 "claimed": false, 00:18:55.542 "zoned": false, 00:18:55.542 "supported_io_types": { 00:18:55.542 "read": true, 00:18:55.542 "write": true, 00:18:55.542 "unmap": true, 00:18:55.542 "write_zeroes": true, 00:18:55.542 "flush": true, 00:18:55.542 "reset": true, 00:18:55.542 "compare": false, 00:18:55.542 "compare_and_write": false, 00:18:55.542 "abort": false, 00:18:55.542 "nvme_admin": false, 00:18:55.542 "nvme_io": false 00:18:55.542 }, 00:18:55.542 "memory_domains": [ 00:18:55.542 { 00:18:55.542 "dma_device_id": "system", 00:18:55.542 "dma_device_type": 1 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.542 "dma_device_type": 2 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "dma_device_id": "system", 00:18:55.542 "dma_device_type": 1 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.542 "dma_device_type": 2 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "dma_device_id": "system", 00:18:55.542 "dma_device_type": 1 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.542 "dma_device_type": 2 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "dma_device_id": "system", 00:18:55.542 "dma_device_type": 1 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.542 "dma_device_type": 2 00:18:55.542 } 00:18:55.542 ], 00:18:55.542 "driver_specific": { 00:18:55.542 "raid": { 00:18:55.542 "uuid": "125e3b38-5175-4d44-b615-ba80dedb0d74", 00:18:55.542 "strip_size_kb": 64, 00:18:55.542 "state": "online", 00:18:55.542 "raid_level": "raid0", 00:18:55.542 "superblock": true, 00:18:55.542 "num_base_bdevs": 4, 00:18:55.542 "num_base_bdevs_discovered": 4, 00:18:55.542 "num_base_bdevs_operational": 4, 00:18:55.542 "base_bdevs_list": [ 00:18:55.542 { 00:18:55.542 "name": "NewBaseBdev", 00:18:55.542 "uuid": "f735d6c7-422c-4b99-82f8-fb877604a49b", 00:18:55.542 "is_configured": true, 00:18:55.542 "data_offset": 2048, 00:18:55.542 "data_size": 63488 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "name": "BaseBdev2", 00:18:55.542 "uuid": "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6", 00:18:55.542 "is_configured": true, 00:18:55.542 "data_offset": 2048, 
00:18:55.542 "data_size": 63488 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "name": "BaseBdev3", 00:18:55.542 "uuid": "285f1bdc-1eae-4295-9702-2b4ba305d0a2", 00:18:55.542 "is_configured": true, 00:18:55.542 "data_offset": 2048, 00:18:55.542 "data_size": 63488 00:18:55.542 }, 00:18:55.542 { 00:18:55.542 "name": "BaseBdev4", 00:18:55.542 "uuid": "7cda21c8-f658-42fb-93e0-8ec43e8dfa97", 00:18:55.542 "is_configured": true, 00:18:55.542 "data_offset": 2048, 00:18:55.542 "data_size": 63488 00:18:55.542 } 00:18:55.542 ] 00:18:55.542 } 00:18:55.542 } 00:18:55.542 }' 00:18:55.542 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:55.542 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:18:55.542 BaseBdev2 00:18:55.542 BaseBdev3 00:18:55.542 BaseBdev4' 00:18:55.542 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:55.542 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:55.542 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:55.801 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:55.801 "name": "NewBaseBdev", 00:18:55.801 "aliases": [ 00:18:55.801 "f735d6c7-422c-4b99-82f8-fb877604a49b" 00:18:55.801 ], 00:18:55.801 "product_name": "Malloc disk", 00:18:55.801 "block_size": 512, 00:18:55.801 "num_blocks": 65536, 00:18:55.801 "uuid": "f735d6c7-422c-4b99-82f8-fb877604a49b", 00:18:55.801 "assigned_rate_limits": { 00:18:55.801 "rw_ios_per_sec": 0, 00:18:55.801 "rw_mbytes_per_sec": 0, 00:18:55.801 "r_mbytes_per_sec": 0, 00:18:55.801 "w_mbytes_per_sec": 0 00:18:55.801 }, 00:18:55.801 "claimed": true, 00:18:55.801 "claim_type": "exclusive_write", 00:18:55.801 "zoned": false, 00:18:55.801 "supported_io_types": { 00:18:55.801 "read": true, 00:18:55.801 "write": true, 00:18:55.801 "unmap": true, 00:18:55.801 "write_zeroes": true, 00:18:55.801 "flush": true, 00:18:55.801 "reset": true, 00:18:55.801 "compare": false, 00:18:55.801 "compare_and_write": false, 00:18:55.801 "abort": true, 00:18:55.801 "nvme_admin": false, 00:18:55.801 "nvme_io": false 00:18:55.801 }, 00:18:55.801 "memory_domains": [ 00:18:55.801 { 00:18:55.801 "dma_device_id": "system", 00:18:55.801 "dma_device_type": 1 00:18:55.801 }, 00:18:55.801 { 00:18:55.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.801 "dma_device_type": 2 00:18:55.801 } 00:18:55.801 ], 00:18:55.801 "driver_specific": {} 00:18:55.801 }' 00:18:55.801 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:55.801 23:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:55.801 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:55.801 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:56.060 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:56.060 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:56.060 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:56.060 23:34:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:56.060 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:56.060 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:56.060 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:56.319 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:56.319 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:56.319 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:56.319 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:56.319 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:56.319 "name": "BaseBdev2", 00:18:56.319 "aliases": [ 00:18:56.319 "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6" 00:18:56.319 ], 00:18:56.319 "product_name": "Malloc disk", 00:18:56.319 "block_size": 512, 00:18:56.319 "num_blocks": 65536, 00:18:56.319 "uuid": "a9fe4ea2-cdb3-45a1-9c7b-f221a829b2a6", 00:18:56.319 "assigned_rate_limits": { 00:18:56.319 "rw_ios_per_sec": 0, 00:18:56.319 "rw_mbytes_per_sec": 0, 00:18:56.319 "r_mbytes_per_sec": 0, 00:18:56.319 "w_mbytes_per_sec": 0 00:18:56.319 }, 00:18:56.319 "claimed": true, 00:18:56.319 "claim_type": "exclusive_write", 00:18:56.319 "zoned": false, 00:18:56.319 "supported_io_types": { 00:18:56.319 "read": true, 00:18:56.319 "write": true, 00:18:56.319 "unmap": true, 00:18:56.319 "write_zeroes": true, 00:18:56.319 "flush": true, 00:18:56.319 "reset": true, 00:18:56.319 "compare": false, 00:18:56.319 "compare_and_write": false, 00:18:56.319 "abort": true, 00:18:56.319 "nvme_admin": false, 00:18:56.319 "nvme_io": false 00:18:56.319 }, 00:18:56.319 "memory_domains": [ 00:18:56.319 { 00:18:56.319 "dma_device_id": "system", 00:18:56.319 "dma_device_type": 1 00:18:56.319 }, 00:18:56.319 { 00:18:56.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.319 "dma_device_type": 2 00:18:56.319 } 00:18:56.319 ], 00:18:56.319 "driver_specific": {} 00:18:56.319 }' 00:18:56.319 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:56.671 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:56.671 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:56.671 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:56.671 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:56.671 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:56.671 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:56.671 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:56.671 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:56.671 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:56.931 23:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:56.931 23:34:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:56.931 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:56.931 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:56.931 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:57.191 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:57.191 "name": "BaseBdev3", 00:18:57.191 "aliases": [ 00:18:57.191 "285f1bdc-1eae-4295-9702-2b4ba305d0a2" 00:18:57.191 ], 00:18:57.191 "product_name": "Malloc disk", 00:18:57.191 "block_size": 512, 00:18:57.191 "num_blocks": 65536, 00:18:57.191 "uuid": "285f1bdc-1eae-4295-9702-2b4ba305d0a2", 00:18:57.191 "assigned_rate_limits": { 00:18:57.191 "rw_ios_per_sec": 0, 00:18:57.191 "rw_mbytes_per_sec": 0, 00:18:57.191 "r_mbytes_per_sec": 0, 00:18:57.191 "w_mbytes_per_sec": 0 00:18:57.191 }, 00:18:57.191 "claimed": true, 00:18:57.191 "claim_type": "exclusive_write", 00:18:57.191 "zoned": false, 00:18:57.191 "supported_io_types": { 00:18:57.191 "read": true, 00:18:57.191 "write": true, 00:18:57.191 "unmap": true, 00:18:57.191 "write_zeroes": true, 00:18:57.191 "flush": true, 00:18:57.191 "reset": true, 00:18:57.191 "compare": false, 00:18:57.191 "compare_and_write": false, 00:18:57.191 "abort": true, 00:18:57.191 "nvme_admin": false, 00:18:57.191 "nvme_io": false 00:18:57.191 }, 00:18:57.191 "memory_domains": [ 00:18:57.191 { 00:18:57.191 "dma_device_id": "system", 00:18:57.191 "dma_device_type": 1 00:18:57.191 }, 00:18:57.191 { 00:18:57.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.191 "dma_device_type": 2 00:18:57.191 } 00:18:57.191 ], 00:18:57.191 "driver_specific": {} 00:18:57.191 }' 00:18:57.191 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:57.191 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:57.191 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:57.191 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:57.191 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:57.191 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:57.191 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:57.449 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:57.449 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:57.449 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:57.449 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:57.449 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:57.449 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:18:57.449 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:18:57.449 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:57.708 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:18:57.708 "name": "BaseBdev4", 00:18:57.708 "aliases": [ 00:18:57.708 "7cda21c8-f658-42fb-93e0-8ec43e8dfa97" 00:18:57.708 ], 00:18:57.708 "product_name": "Malloc disk", 00:18:57.708 "block_size": 512, 00:18:57.708 "num_blocks": 65536, 00:18:57.708 "uuid": "7cda21c8-f658-42fb-93e0-8ec43e8dfa97", 00:18:57.708 "assigned_rate_limits": { 00:18:57.708 "rw_ios_per_sec": 0, 00:18:57.708 "rw_mbytes_per_sec": 0, 00:18:57.708 "r_mbytes_per_sec": 0, 00:18:57.708 "w_mbytes_per_sec": 0 00:18:57.708 }, 00:18:57.708 "claimed": true, 00:18:57.708 "claim_type": "exclusive_write", 00:18:57.708 "zoned": false, 00:18:57.708 "supported_io_types": { 00:18:57.708 "read": true, 00:18:57.708 "write": true, 00:18:57.708 "unmap": true, 00:18:57.708 "write_zeroes": true, 00:18:57.708 "flush": true, 00:18:57.708 "reset": true, 00:18:57.708 "compare": false, 00:18:57.708 "compare_and_write": false, 00:18:57.708 "abort": true, 00:18:57.708 "nvme_admin": false, 00:18:57.708 "nvme_io": false 00:18:57.708 }, 00:18:57.708 "memory_domains": [ 00:18:57.708 { 00:18:57.708 "dma_device_id": "system", 00:18:57.708 "dma_device_type": 1 00:18:57.708 }, 00:18:57.708 { 00:18:57.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.708 "dma_device_type": 2 00:18:57.708 } 00:18:57.708 ], 00:18:57.708 "driver_specific": {} 00:18:57.708 }' 00:18:57.708 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:57.708 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:18:57.967 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:18:57.967 23:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:57.967 23:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:18:57.967 23:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:57.967 23:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:57.967 23:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:18:57.967 23:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:57.967 23:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:58.226 23:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:18:58.226 23:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:18:58.226 23:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:58.484 [2024-05-14 23:34:21.530838] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:58.484 [2024-05-14 23:34:21.530877] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.484 [2024-05-14 23:34:21.530945] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.484 [2024-05-14 23:34:21.530992] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.484 [2024-05-14 23:34:21.531004] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:18:58.484 23:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 65316 00:18:58.484 23:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 65316 ']' 00:18:58.484 23:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 65316 00:18:58.484 23:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:18:58.484 23:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:58.484 23:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65316 00:18:58.484 23:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:58.484 23:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:58.484 killing process with pid 65316 00:18:58.484 23:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65316' 00:18:58.484 23:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 65316 00:18:58.484 23:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 65316 00:18:58.484 [2024-05-14 23:34:21.564511] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:58.742 [2024-05-14 23:34:21.894233] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:00.115 ************************************ 00:19:00.115 END TEST raid_state_function_test_sb 00:19:00.115 ************************************ 00:19:00.115 23:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:19:00.115 00:19:00.115 real 0m33.625s 00:19:00.115 user 1m3.396s 00:19:00.115 sys 0m3.297s 00:19:00.115 23:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:00.116 23:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.116 23:34:23 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:19:00.116 23:34:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:00.116 23:34:23 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:00.116 23:34:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:00.116 ************************************ 00:19:00.116 START TEST raid_superblock_test 00:19:00.116 ************************************ 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 4 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:00.116 
23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66423 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66423 /var/tmp/spdk-raid.sock 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 66423 ']' 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:00.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:00.116 23:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.116 [2024-05-14 23:34:23.298683] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
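The raid_superblock_test setup traced below builds each base device as a malloc bdev wrapped in a passthru bdev, then assembles the four passthru bdevs into a raid0 volume that carries an on-disk superblock. Condensed into a standalone sketch (same RPC calls as in the trace; the loop and variable names are illustrative and not part of the test script):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b malloc$i        # 32 MB malloc bdev, 512-byte blocks
        $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # -z 64 = 64 KiB strip size, -r raid0 = RAID level, -s = create with a superblock
    # (cf. "superblock": true in the raid_bdev_get_bdevs dump that follows)
    $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    $RPC bdev_raid_get_bdevs all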
00:19:00.116 [2024-05-14 23:34:23.298913] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66423 ] 00:19:00.373 [2024-05-14 23:34:23.461793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.630 [2024-05-14 23:34:23.698504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.630 [2024-05-14 23:34:23.908522] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.887 23:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:00.887 23:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:19:00.887 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:00.887 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:00.887 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:00.887 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:00.887 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:00.887 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:00.887 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:00.887 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:00.887 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:01.145 malloc1 00:19:01.145 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:01.402 [2024-05-14 23:34:24.532196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:01.402 [2024-05-14 23:34:24.532306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.402 [2024-05-14 23:34:24.532361] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:19:01.402 [2024-05-14 23:34:24.532406] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.402 [2024-05-14 23:34:24.534780] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.402 [2024-05-14 23:34:24.534823] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:01.402 pt1 00:19:01.402 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:01.402 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:01.402 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:01.402 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:01.402 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:01.402 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:19:01.402 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.402 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.402 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:01.762 malloc2 00:19:01.762 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:01.762 [2024-05-14 23:34:24.943221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:01.762 [2024-05-14 23:34:24.943301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.762 [2024-05-14 23:34:24.943350] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:19:01.762 [2024-05-14 23:34:24.943423] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.762 [2024-05-14 23:34:24.945206] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.762 [2024-05-14 23:34:24.945260] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:01.762 pt2 00:19:01.762 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:01.762 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:01.762 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:01.762 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:01.762 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:01.762 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.762 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.762 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.762 23:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:02.019 malloc3 00:19:02.019 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:02.277 [2024-05-14 23:34:25.375817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:02.277 [2024-05-14 23:34:25.375916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.277 [2024-05-14 23:34:25.375980] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:19:02.277 [2024-05-14 23:34:25.376027] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.277 [2024-05-14 23:34:25.377956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.277 [2024-05-14 23:34:25.378018] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:02.277 pt3 00:19:02.277 23:34:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.277 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.277 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:02.277 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:02.277 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:02.277 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.277 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.277 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.277 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:02.534 malloc4 00:19:02.534 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:02.534 [2024-05-14 23:34:25.817813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:02.534 [2024-05-14 23:34:25.817925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.534 [2024-05-14 23:34:25.817976] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:19:02.534 [2024-05-14 23:34:25.818032] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.534 [2024-05-14 23:34:25.820037] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.534 [2024-05-14 23:34:25.820096] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:02.792 pt4 00:19:02.792 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.792 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.792 23:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:02.792 [2024-05-14 23:34:26.005866] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.792 [2024-05-14 23:34:26.008326] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.792 [2024-05-14 23:34:26.008443] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:02.792 [2024-05-14 23:34:26.008572] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:02.792 [2024-05-14 23:34:26.008876] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:19:02.792 [2024-05-14 23:34:26.008911] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:02.792 [2024-05-14 23:34:26.009186] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:02.792 [2024-05-14 23:34:26.009690] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:19:02.792 [2024-05-14 23:34:26.009726] bdev_raid.c:1726:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:19:02.792 [2024-05-14 23:34:26.010036] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.792 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.050 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.050 "name": "raid_bdev1", 00:19:03.050 "uuid": "d4f541f7-51d2-423f-a7d8-41c04692b75b", 00:19:03.050 "strip_size_kb": 64, 00:19:03.050 "state": "online", 00:19:03.050 "raid_level": "raid0", 00:19:03.050 "superblock": true, 00:19:03.050 "num_base_bdevs": 4, 00:19:03.050 "num_base_bdevs_discovered": 4, 00:19:03.050 "num_base_bdevs_operational": 4, 00:19:03.050 "base_bdevs_list": [ 00:19:03.050 { 00:19:03.050 "name": "pt1", 00:19:03.050 "uuid": "cff71c90-835c-54cd-a9a4-ce088a32a8e4", 00:19:03.050 "is_configured": true, 00:19:03.050 "data_offset": 2048, 00:19:03.050 "data_size": 63488 00:19:03.050 }, 00:19:03.050 { 00:19:03.050 "name": "pt2", 00:19:03.050 "uuid": "37314f72-0be4-548b-b602-b88827c46a31", 00:19:03.050 "is_configured": true, 00:19:03.050 "data_offset": 2048, 00:19:03.050 "data_size": 63488 00:19:03.050 }, 00:19:03.050 { 00:19:03.050 "name": "pt3", 00:19:03.050 "uuid": "cc6f5c45-2b05-5e1d-994c-a077d3408f40", 00:19:03.050 "is_configured": true, 00:19:03.050 "data_offset": 2048, 00:19:03.050 "data_size": 63488 00:19:03.050 }, 00:19:03.050 { 00:19:03.050 "name": "pt4", 00:19:03.050 "uuid": "2e12da70-ef6c-5e25-a802-76825374493a", 00:19:03.050 "is_configured": true, 00:19:03.050 "data_offset": 2048, 00:19:03.050 "data_size": 63488 00:19:03.050 } 00:19:03.050 ] 00:19:03.050 }' 00:19:03.050 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.050 23:34:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.615 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:03.615 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:19:03.615 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:03.615 
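With all four passthru bdevs registered, the test assembles them into raid_bdev1 as a raid0 volume with an on-disk superblock (`-s`) and then checks the reported state, as shown above. A hedged sketch of that create-and-verify step follows; the `rpc` helper and the reduced comparison are mine, the real `verify_raid_bdev_state` helper also checks strip size and base-bdev counts.

```bash
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# raid0 over the four passthru bdevs, 64 KiB strip size, with superblock (-s),
# exactly as invoked in the log above.
rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# Simplified state check: fetch all raid bdevs and pick out raid_bdev1.
info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(jq -r '.state' <<< "$info")
level=$(jq -r '.raid_level' <<< "$info")

[[ "$state" == "online" && "$level" == "raid0" ]] || exit 1
```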
23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:03.615 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:03.615 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:03.871 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:03.871 23:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:03.871 [2024-05-14 23:34:27.122257] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.871 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:03.871 "name": "raid_bdev1", 00:19:03.871 "aliases": [ 00:19:03.871 "d4f541f7-51d2-423f-a7d8-41c04692b75b" 00:19:03.871 ], 00:19:03.871 "product_name": "Raid Volume", 00:19:03.871 "block_size": 512, 00:19:03.871 "num_blocks": 253952, 00:19:03.871 "uuid": "d4f541f7-51d2-423f-a7d8-41c04692b75b", 00:19:03.871 "assigned_rate_limits": { 00:19:03.871 "rw_ios_per_sec": 0, 00:19:03.871 "rw_mbytes_per_sec": 0, 00:19:03.871 "r_mbytes_per_sec": 0, 00:19:03.871 "w_mbytes_per_sec": 0 00:19:03.871 }, 00:19:03.871 "claimed": false, 00:19:03.871 "zoned": false, 00:19:03.871 "supported_io_types": { 00:19:03.871 "read": true, 00:19:03.871 "write": true, 00:19:03.871 "unmap": true, 00:19:03.871 "write_zeroes": true, 00:19:03.871 "flush": true, 00:19:03.871 "reset": true, 00:19:03.871 "compare": false, 00:19:03.871 "compare_and_write": false, 00:19:03.872 "abort": false, 00:19:03.872 "nvme_admin": false, 00:19:03.872 "nvme_io": false 00:19:03.872 }, 00:19:03.872 "memory_domains": [ 00:19:03.872 { 00:19:03.872 "dma_device_id": "system", 00:19:03.872 "dma_device_type": 1 00:19:03.872 }, 00:19:03.872 { 00:19:03.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.872 "dma_device_type": 2 00:19:03.872 }, 00:19:03.872 { 00:19:03.872 "dma_device_id": "system", 00:19:03.872 "dma_device_type": 1 00:19:03.872 }, 00:19:03.872 { 00:19:03.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.872 "dma_device_type": 2 00:19:03.872 }, 00:19:03.872 { 00:19:03.872 "dma_device_id": "system", 00:19:03.872 "dma_device_type": 1 00:19:03.872 }, 00:19:03.872 { 00:19:03.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.872 "dma_device_type": 2 00:19:03.872 }, 00:19:03.872 { 00:19:03.872 "dma_device_id": "system", 00:19:03.872 "dma_device_type": 1 00:19:03.872 }, 00:19:03.872 { 00:19:03.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.872 "dma_device_type": 2 00:19:03.872 } 00:19:03.872 ], 00:19:03.872 "driver_specific": { 00:19:03.872 "raid": { 00:19:03.872 "uuid": "d4f541f7-51d2-423f-a7d8-41c04692b75b", 00:19:03.872 "strip_size_kb": 64, 00:19:03.872 "state": "online", 00:19:03.872 "raid_level": "raid0", 00:19:03.872 "superblock": true, 00:19:03.872 "num_base_bdevs": 4, 00:19:03.872 "num_base_bdevs_discovered": 4, 00:19:03.872 "num_base_bdevs_operational": 4, 00:19:03.872 "base_bdevs_list": [ 00:19:03.872 { 00:19:03.872 "name": "pt1", 00:19:03.872 "uuid": "cff71c90-835c-54cd-a9a4-ce088a32a8e4", 00:19:03.872 "is_configured": true, 00:19:03.872 "data_offset": 2048, 00:19:03.872 "data_size": 63488 00:19:03.872 }, 00:19:03.872 { 00:19:03.872 "name": "pt2", 00:19:03.872 "uuid": "37314f72-0be4-548b-b602-b88827c46a31", 00:19:03.872 "is_configured": true, 00:19:03.872 "data_offset": 2048, 00:19:03.872 "data_size": 63488 00:19:03.872 }, 00:19:03.872 
{ 00:19:03.872 "name": "pt3", 00:19:03.872 "uuid": "cc6f5c45-2b05-5e1d-994c-a077d3408f40", 00:19:03.872 "is_configured": true, 00:19:03.872 "data_offset": 2048, 00:19:03.872 "data_size": 63488 00:19:03.872 }, 00:19:03.872 { 00:19:03.872 "name": "pt4", 00:19:03.872 "uuid": "2e12da70-ef6c-5e25-a802-76825374493a", 00:19:03.872 "is_configured": true, 00:19:03.872 "data_offset": 2048, 00:19:03.872 "data_size": 63488 00:19:03.872 } 00:19:03.872 ] 00:19:03.872 } 00:19:03.872 } 00:19:03.872 }' 00:19:03.872 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:04.130 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:19:04.130 pt2 00:19:04.130 pt3 00:19:04.130 pt4' 00:19:04.130 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:04.130 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:04.130 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:04.130 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:04.130 "name": "pt1", 00:19:04.130 "aliases": [ 00:19:04.130 "cff71c90-835c-54cd-a9a4-ce088a32a8e4" 00:19:04.130 ], 00:19:04.130 "product_name": "passthru", 00:19:04.130 "block_size": 512, 00:19:04.130 "num_blocks": 65536, 00:19:04.130 "uuid": "cff71c90-835c-54cd-a9a4-ce088a32a8e4", 00:19:04.130 "assigned_rate_limits": { 00:19:04.130 "rw_ios_per_sec": 0, 00:19:04.130 "rw_mbytes_per_sec": 0, 00:19:04.130 "r_mbytes_per_sec": 0, 00:19:04.130 "w_mbytes_per_sec": 0 00:19:04.130 }, 00:19:04.130 "claimed": true, 00:19:04.130 "claim_type": "exclusive_write", 00:19:04.130 "zoned": false, 00:19:04.130 "supported_io_types": { 00:19:04.130 "read": true, 00:19:04.130 "write": true, 00:19:04.130 "unmap": true, 00:19:04.130 "write_zeroes": true, 00:19:04.130 "flush": true, 00:19:04.130 "reset": true, 00:19:04.130 "compare": false, 00:19:04.130 "compare_and_write": false, 00:19:04.130 "abort": true, 00:19:04.130 "nvme_admin": false, 00:19:04.130 "nvme_io": false 00:19:04.130 }, 00:19:04.130 "memory_domains": [ 00:19:04.130 { 00:19:04.130 "dma_device_id": "system", 00:19:04.130 "dma_device_type": 1 00:19:04.130 }, 00:19:04.130 { 00:19:04.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.130 "dma_device_type": 2 00:19:04.130 } 00:19:04.130 ], 00:19:04.130 "driver_specific": { 00:19:04.130 "passthru": { 00:19:04.130 "name": "pt1", 00:19:04.130 "base_bdev_name": "malloc1" 00:19:04.130 } 00:19:04.130 } 00:19:04.130 }' 00:19:04.130 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:04.389 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:04.389 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:04.389 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:04.389 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:04.389 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:04.389 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:04.389 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:04.646 23:34:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:04.646 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:04.646 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:04.646 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:04.646 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:04.646 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:04.646 23:34:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:04.927 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:04.927 "name": "pt2", 00:19:04.927 "aliases": [ 00:19:04.927 "37314f72-0be4-548b-b602-b88827c46a31" 00:19:04.927 ], 00:19:04.927 "product_name": "passthru", 00:19:04.927 "block_size": 512, 00:19:04.927 "num_blocks": 65536, 00:19:04.927 "uuid": "37314f72-0be4-548b-b602-b88827c46a31", 00:19:04.927 "assigned_rate_limits": { 00:19:04.927 "rw_ios_per_sec": 0, 00:19:04.927 "rw_mbytes_per_sec": 0, 00:19:04.927 "r_mbytes_per_sec": 0, 00:19:04.927 "w_mbytes_per_sec": 0 00:19:04.927 }, 00:19:04.927 "claimed": true, 00:19:04.927 "claim_type": "exclusive_write", 00:19:04.927 "zoned": false, 00:19:04.927 "supported_io_types": { 00:19:04.927 "read": true, 00:19:04.927 "write": true, 00:19:04.927 "unmap": true, 00:19:04.927 "write_zeroes": true, 00:19:04.927 "flush": true, 00:19:04.927 "reset": true, 00:19:04.927 "compare": false, 00:19:04.927 "compare_and_write": false, 00:19:04.927 "abort": true, 00:19:04.927 "nvme_admin": false, 00:19:04.927 "nvme_io": false 00:19:04.927 }, 00:19:04.927 "memory_domains": [ 00:19:04.927 { 00:19:04.927 "dma_device_id": "system", 00:19:04.927 "dma_device_type": 1 00:19:04.927 }, 00:19:04.927 { 00:19:04.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.927 "dma_device_type": 2 00:19:04.927 } 00:19:04.927 ], 00:19:04.927 "driver_specific": { 00:19:04.927 "passthru": { 00:19:04.927 "name": "pt2", 00:19:04.927 "base_bdev_name": "malloc2" 00:19:04.927 } 00:19:04.927 } 00:19:04.927 }' 00:19:04.927 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:04.927 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:04.927 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:04.927 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:04.927 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:05.207 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.207 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:05.207 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:05.207 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.207 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:05.207 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:05.207 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:05.207 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- 
# for name in $base_bdev_names 00:19:05.207 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:05.207 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:05.465 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:05.465 "name": "pt3", 00:19:05.465 "aliases": [ 00:19:05.465 "cc6f5c45-2b05-5e1d-994c-a077d3408f40" 00:19:05.465 ], 00:19:05.465 "product_name": "passthru", 00:19:05.465 "block_size": 512, 00:19:05.465 "num_blocks": 65536, 00:19:05.465 "uuid": "cc6f5c45-2b05-5e1d-994c-a077d3408f40", 00:19:05.465 "assigned_rate_limits": { 00:19:05.465 "rw_ios_per_sec": 0, 00:19:05.465 "rw_mbytes_per_sec": 0, 00:19:05.465 "r_mbytes_per_sec": 0, 00:19:05.465 "w_mbytes_per_sec": 0 00:19:05.465 }, 00:19:05.465 "claimed": true, 00:19:05.465 "claim_type": "exclusive_write", 00:19:05.465 "zoned": false, 00:19:05.465 "supported_io_types": { 00:19:05.465 "read": true, 00:19:05.465 "write": true, 00:19:05.465 "unmap": true, 00:19:05.465 "write_zeroes": true, 00:19:05.465 "flush": true, 00:19:05.465 "reset": true, 00:19:05.465 "compare": false, 00:19:05.465 "compare_and_write": false, 00:19:05.465 "abort": true, 00:19:05.465 "nvme_admin": false, 00:19:05.465 "nvme_io": false 00:19:05.465 }, 00:19:05.465 "memory_domains": [ 00:19:05.465 { 00:19:05.465 "dma_device_id": "system", 00:19:05.465 "dma_device_type": 1 00:19:05.465 }, 00:19:05.465 { 00:19:05.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.465 "dma_device_type": 2 00:19:05.465 } 00:19:05.465 ], 00:19:05.465 "driver_specific": { 00:19:05.465 "passthru": { 00:19:05.465 "name": "pt3", 00:19:05.465 "base_bdev_name": "malloc3" 00:19:05.465 } 00:19:05.465 } 00:19:05.465 }' 00:19:05.465 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:05.465 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:05.724 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:05.724 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:05.724 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:05.724 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.724 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:05.724 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:05.724 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.724 23:34:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:05.982 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:05.982 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:05.982 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:05.982 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:05.982 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:19:06.240 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:06.240 "name": "pt4", 00:19:06.240 "aliases": [ 
00:19:06.240 "2e12da70-ef6c-5e25-a802-76825374493a" 00:19:06.240 ], 00:19:06.240 "product_name": "passthru", 00:19:06.240 "block_size": 512, 00:19:06.240 "num_blocks": 65536, 00:19:06.240 "uuid": "2e12da70-ef6c-5e25-a802-76825374493a", 00:19:06.241 "assigned_rate_limits": { 00:19:06.241 "rw_ios_per_sec": 0, 00:19:06.241 "rw_mbytes_per_sec": 0, 00:19:06.241 "r_mbytes_per_sec": 0, 00:19:06.241 "w_mbytes_per_sec": 0 00:19:06.241 }, 00:19:06.241 "claimed": true, 00:19:06.241 "claim_type": "exclusive_write", 00:19:06.241 "zoned": false, 00:19:06.241 "supported_io_types": { 00:19:06.241 "read": true, 00:19:06.241 "write": true, 00:19:06.241 "unmap": true, 00:19:06.241 "write_zeroes": true, 00:19:06.241 "flush": true, 00:19:06.241 "reset": true, 00:19:06.241 "compare": false, 00:19:06.241 "compare_and_write": false, 00:19:06.241 "abort": true, 00:19:06.241 "nvme_admin": false, 00:19:06.241 "nvme_io": false 00:19:06.241 }, 00:19:06.241 "memory_domains": [ 00:19:06.241 { 00:19:06.241 "dma_device_id": "system", 00:19:06.241 "dma_device_type": 1 00:19:06.241 }, 00:19:06.241 { 00:19:06.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.241 "dma_device_type": 2 00:19:06.241 } 00:19:06.241 ], 00:19:06.241 "driver_specific": { 00:19:06.241 "passthru": { 00:19:06.241 "name": "pt4", 00:19:06.241 "base_bdev_name": "malloc4" 00:19:06.241 } 00:19:06.241 } 00:19:06.241 }' 00:19:06.241 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:06.241 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:06.241 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:06.241 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:06.241 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:06.499 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:06.499 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:06.499 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:06.499 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:06.499 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:06.499 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:06.499 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:06.499 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:06.499 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:06.757 [2024-05-14 23:34:29.910767] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.757 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d4f541f7-51d2-423f-a7d8-41c04692b75b 00:19:06.757 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d4f541f7-51d2-423f-a7d8-41c04692b75b ']' 00:19:06.757 23:34:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:07.015 [2024-05-14 23:34:30.114505] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.016 
[2024-05-14 23:34:30.114545] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.016 [2024-05-14 23:34:30.114631] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.016 [2024-05-14 23:34:30.114675] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.016 [2024-05-14 23:34:30.114685] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:19:07.016 23:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.016 23:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:07.273 23:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:07.273 23:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:07.273 23:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:07.273 23:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:07.532 23:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:07.532 23:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:07.532 23:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:07.532 23:34:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:07.789 23:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:07.789 23:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:08.047 23:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:08.047 23:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # 
type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:08.305 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:08.566 [2024-05-14 23:34:31.634822] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:08.566 [2024-05-14 23:34:31.636527] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:08.566 [2024-05-14 23:34:31.636575] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:08.566 [2024-05-14 23:34:31.636618] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:08.566 [2024-05-14 23:34:31.636651] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:08.566 [2024-05-14 23:34:31.636730] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:08.566 [2024-05-14 23:34:31.636762] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:08.566 [2024-05-14 23:34:31.636813] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:08.566 [2024-05-14 23:34:31.636845] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.566 [2024-05-14 23:34:31.636856] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:19:08.566 request: 00:19:08.566 { 00:19:08.566 "name": "raid_bdev1", 00:19:08.566 "raid_level": "raid0", 00:19:08.566 "base_bdevs": [ 00:19:08.566 "malloc1", 00:19:08.566 "malloc2", 00:19:08.566 "malloc3", 00:19:08.566 "malloc4" 00:19:08.566 ], 00:19:08.566 "superblock": false, 00:19:08.566 "strip_size_kb": 64, 00:19:08.566 "method": "bdev_raid_create", 00:19:08.566 "req_id": 1 00:19:08.566 } 00:19:08.566 Got JSON-RPC error response 00:19:08.566 response: 00:19:08.566 { 00:19:08.566 "code": -17, 00:19:08.566 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:08.566 } 00:19:08.566 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:08.566 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.566 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.566 23:34:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.566 23:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.566 23:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:08.825 23:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:08.825 23:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:08.825 23:34:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:09.084 [2024-05-14 23:34:32.122821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:09.084 [2024-05-14 23:34:32.122918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.084 [2024-05-14 23:34:32.122971] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f780 00:19:09.084 [2024-05-14 23:34:32.123028] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.084 [2024-05-14 23:34:32.124864] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.084 [2024-05-14 23:34:32.124924] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:09.084 [2024-05-14 23:34:32.125022] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:09.084 [2024-05-14 23:34:32.125087] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:09.084 pt1 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:09.084 "name": "raid_bdev1", 00:19:09.084 "uuid": "d4f541f7-51d2-423f-a7d8-41c04692b75b", 00:19:09.084 "strip_size_kb": 64, 00:19:09.084 "state": "configuring", 00:19:09.084 "raid_level": "raid0", 00:19:09.084 "superblock": true, 00:19:09.084 "num_base_bdevs": 4, 00:19:09.084 "num_base_bdevs_discovered": 1, 00:19:09.084 "num_base_bdevs_operational": 4, 00:19:09.084 "base_bdevs_list": [ 00:19:09.084 { 00:19:09.084 "name": "pt1", 00:19:09.084 "uuid": 
"cff71c90-835c-54cd-a9a4-ce088a32a8e4", 00:19:09.084 "is_configured": true, 00:19:09.084 "data_offset": 2048, 00:19:09.084 "data_size": 63488 00:19:09.084 }, 00:19:09.084 { 00:19:09.084 "name": null, 00:19:09.084 "uuid": "37314f72-0be4-548b-b602-b88827c46a31", 00:19:09.084 "is_configured": false, 00:19:09.084 "data_offset": 2048, 00:19:09.084 "data_size": 63488 00:19:09.084 }, 00:19:09.084 { 00:19:09.084 "name": null, 00:19:09.084 "uuid": "cc6f5c45-2b05-5e1d-994c-a077d3408f40", 00:19:09.084 "is_configured": false, 00:19:09.084 "data_offset": 2048, 00:19:09.084 "data_size": 63488 00:19:09.084 }, 00:19:09.084 { 00:19:09.084 "name": null, 00:19:09.084 "uuid": "2e12da70-ef6c-5e25-a802-76825374493a", 00:19:09.084 "is_configured": false, 00:19:09.084 "data_offset": 2048, 00:19:09.084 "data_size": 63488 00:19:09.084 } 00:19:09.084 ] 00:19:09.084 }' 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:09.084 23:34:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.019 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:10.019 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:10.019 [2024-05-14 23:34:33.254968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:10.019 [2024-05-14 23:34:33.255060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.019 [2024-05-14 23:34:33.255114] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031280 00:19:10.019 [2024-05-14 23:34:33.255139] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.019 [2024-05-14 23:34:33.255522] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.019 [2024-05-14 23:34:33.255571] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:10.019 [2024-05-14 23:34:33.255663] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:10.019 [2024-05-14 23:34:33.255690] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:10.019 pt2 00:19:10.019 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:10.277 [2024-05-14 23:34:33.503044] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.277 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.536 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:10.536 "name": "raid_bdev1", 00:19:10.536 "uuid": "d4f541f7-51d2-423f-a7d8-41c04692b75b", 00:19:10.536 "strip_size_kb": 64, 00:19:10.536 "state": "configuring", 00:19:10.536 "raid_level": "raid0", 00:19:10.536 "superblock": true, 00:19:10.536 "num_base_bdevs": 4, 00:19:10.536 "num_base_bdevs_discovered": 1, 00:19:10.536 "num_base_bdevs_operational": 4, 00:19:10.536 "base_bdevs_list": [ 00:19:10.536 { 00:19:10.536 "name": "pt1", 00:19:10.536 "uuid": "cff71c90-835c-54cd-a9a4-ce088a32a8e4", 00:19:10.536 "is_configured": true, 00:19:10.536 "data_offset": 2048, 00:19:10.536 "data_size": 63488 00:19:10.536 }, 00:19:10.536 { 00:19:10.536 "name": null, 00:19:10.536 "uuid": "37314f72-0be4-548b-b602-b88827c46a31", 00:19:10.536 "is_configured": false, 00:19:10.536 "data_offset": 2048, 00:19:10.536 "data_size": 63488 00:19:10.536 }, 00:19:10.536 { 00:19:10.536 "name": null, 00:19:10.536 "uuid": "cc6f5c45-2b05-5e1d-994c-a077d3408f40", 00:19:10.536 "is_configured": false, 00:19:10.536 "data_offset": 2048, 00:19:10.536 "data_size": 63488 00:19:10.536 }, 00:19:10.536 { 00:19:10.536 "name": null, 00:19:10.536 "uuid": "2e12da70-ef6c-5e25-a802-76825374493a", 00:19:10.536 "is_configured": false, 00:19:10.536 "data_offset": 2048, 00:19:10.536 "data_size": 63488 00:19:10.536 } 00:19:10.536 ] 00:19:10.536 }' 00:19:10.536 23:34:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:10.536 23:34:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.470 23:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:11.470 23:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:11.470 23:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:11.470 [2024-05-14 23:34:34.619193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:11.470 [2024-05-14 23:34:34.619289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.470 [2024-05-14 23:34:34.619340] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032780 00:19:11.470 [2024-05-14 23:34:34.619365] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.470 [2024-05-14 23:34:34.619747] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.470 [2024-05-14 23:34:34.619798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:11.470 [2024-05-14 23:34:34.619887] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:11.470 [2024-05-14 23:34:34.619913] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
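Re-creating a passthru bdev with its original UUID is enough for the RAID module to recognize it: the examine path reads the on-disk superblock ("raid superblock found on bdev pt2") and re-claims the member, while the raid bdev stays in the "configuring" state until all four members are back. A rough sketch of the attach/detach/re-attach sequence seen above; the `rpc` and `raid_state` helpers and the single-field jq check are simplifications of the real helper.

```bash
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
raid_state() { rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'; }

# Attach the second member, detach it again, and confirm the volume is still
# only "configuring" (it cannot come online until all four members are back).
rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
rpc bdev_passthru_delete pt2
[[ "$(raid_state)" == "configuring" ]] || exit 1

# Re-attach it for good; examine reads the superblock and re-claims pt2.
rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
```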
00:19:11.470 pt2 00:19:11.470 23:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:11.470 23:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:11.470 23:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:11.728 [2024-05-14 23:34:34.867201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:11.728 [2024-05-14 23:34:34.867310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.728 [2024-05-14 23:34:34.867359] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033c80 00:19:11.728 [2024-05-14 23:34:34.867394] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.728 [2024-05-14 23:34:34.867754] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.728 [2024-05-14 23:34:34.867803] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:11.728 [2024-05-14 23:34:34.867891] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:11.728 [2024-05-14 23:34:34.867915] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:11.728 pt3 00:19:11.728 23:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:11.728 23:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:11.728 23:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:11.987 [2024-05-14 23:34:35.123244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:11.987 [2024-05-14 23:34:35.123342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.987 [2024-05-14 23:34:35.123386] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035180 00:19:11.987 [2024-05-14 23:34:35.123416] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.987 [2024-05-14 23:34:35.123761] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.987 [2024-05-14 23:34:35.123811] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:11.987 [2024-05-14 23:34:35.123901] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:11.987 [2024-05-14 23:34:35.123928] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:11.987 [2024-05-14 23:34:35.124019] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:19:11.987 [2024-05-14 23:34:35.124031] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:11.987 [2024-05-14 23:34:35.124105] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:11.987 [2024-05-14 23:34:35.124343] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:19:11.987 [2024-05-14 23:34:35.124359] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:19:11.987 [2024-05-14 
23:34:35.124453] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.987 pt4 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.987 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.245 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:12.245 "name": "raid_bdev1", 00:19:12.245 "uuid": "d4f541f7-51d2-423f-a7d8-41c04692b75b", 00:19:12.245 "strip_size_kb": 64, 00:19:12.245 "state": "online", 00:19:12.245 "raid_level": "raid0", 00:19:12.245 "superblock": true, 00:19:12.245 "num_base_bdevs": 4, 00:19:12.245 "num_base_bdevs_discovered": 4, 00:19:12.245 "num_base_bdevs_operational": 4, 00:19:12.245 "base_bdevs_list": [ 00:19:12.245 { 00:19:12.245 "name": "pt1", 00:19:12.245 "uuid": "cff71c90-835c-54cd-a9a4-ce088a32a8e4", 00:19:12.245 "is_configured": true, 00:19:12.245 "data_offset": 2048, 00:19:12.245 "data_size": 63488 00:19:12.245 }, 00:19:12.245 { 00:19:12.245 "name": "pt2", 00:19:12.245 "uuid": "37314f72-0be4-548b-b602-b88827c46a31", 00:19:12.245 "is_configured": true, 00:19:12.245 "data_offset": 2048, 00:19:12.245 "data_size": 63488 00:19:12.245 }, 00:19:12.245 { 00:19:12.245 "name": "pt3", 00:19:12.245 "uuid": "cc6f5c45-2b05-5e1d-994c-a077d3408f40", 00:19:12.245 "is_configured": true, 00:19:12.245 "data_offset": 2048, 00:19:12.245 "data_size": 63488 00:19:12.245 }, 00:19:12.245 { 00:19:12.245 "name": "pt4", 00:19:12.245 "uuid": "2e12da70-ef6c-5e25-a802-76825374493a", 00:19:12.245 "is_configured": true, 00:19:12.245 "data_offset": 2048, 00:19:12.245 "data_size": 63488 00:19:12.245 } 00:19:12.245 ] 00:19:12.245 }' 00:19:12.245 23:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:12.245 23:34:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.866 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:12.866 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 
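Once pt3 and pt4 are re-created the RAID module registers the io device again and raid_bdev1 returns to "online" with all four members discovered, as the dump above shows. Below is a small sketch of the kind of checks `verify_raid_bdev_state raid_bdev1 online raid0 64 4` performs against that JSON; the field names come from the output above, while the jq pipeline and comparisons are illustrative.

```bash
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

# All four base bdevs should be discovered and operational again.
[[ "$(jq -r '.state' <<< "$info")" == "online" ]] || exit 1
[[ "$(jq -r '.raid_level' <<< "$info")" == "raid0" ]] || exit 1
[[ "$(jq -r '.strip_size_kb' <<< "$info")" -eq 64 ]] || exit 1
[[ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 4 ]] || exit 1
[[ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" -eq 4 ]] || exit 1
```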
00:19:12.866 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:12.866 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:12.866 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:12.866 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:12.866 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:12.866 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:13.125 [2024-05-14 23:34:36.343588] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.125 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:13.125 "name": "raid_bdev1", 00:19:13.125 "aliases": [ 00:19:13.125 "d4f541f7-51d2-423f-a7d8-41c04692b75b" 00:19:13.125 ], 00:19:13.125 "product_name": "Raid Volume", 00:19:13.125 "block_size": 512, 00:19:13.125 "num_blocks": 253952, 00:19:13.125 "uuid": "d4f541f7-51d2-423f-a7d8-41c04692b75b", 00:19:13.125 "assigned_rate_limits": { 00:19:13.125 "rw_ios_per_sec": 0, 00:19:13.125 "rw_mbytes_per_sec": 0, 00:19:13.125 "r_mbytes_per_sec": 0, 00:19:13.125 "w_mbytes_per_sec": 0 00:19:13.125 }, 00:19:13.125 "claimed": false, 00:19:13.125 "zoned": false, 00:19:13.125 "supported_io_types": { 00:19:13.125 "read": true, 00:19:13.125 "write": true, 00:19:13.125 "unmap": true, 00:19:13.125 "write_zeroes": true, 00:19:13.125 "flush": true, 00:19:13.125 "reset": true, 00:19:13.125 "compare": false, 00:19:13.125 "compare_and_write": false, 00:19:13.125 "abort": false, 00:19:13.125 "nvme_admin": false, 00:19:13.125 "nvme_io": false 00:19:13.125 }, 00:19:13.125 "memory_domains": [ 00:19:13.125 { 00:19:13.125 "dma_device_id": "system", 00:19:13.125 "dma_device_type": 1 00:19:13.125 }, 00:19:13.125 { 00:19:13.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.125 "dma_device_type": 2 00:19:13.125 }, 00:19:13.125 { 00:19:13.125 "dma_device_id": "system", 00:19:13.125 "dma_device_type": 1 00:19:13.125 }, 00:19:13.125 { 00:19:13.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.125 "dma_device_type": 2 00:19:13.125 }, 00:19:13.125 { 00:19:13.125 "dma_device_id": "system", 00:19:13.125 "dma_device_type": 1 00:19:13.125 }, 00:19:13.125 { 00:19:13.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.125 "dma_device_type": 2 00:19:13.125 }, 00:19:13.125 { 00:19:13.125 "dma_device_id": "system", 00:19:13.125 "dma_device_type": 1 00:19:13.125 }, 00:19:13.125 { 00:19:13.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.125 "dma_device_type": 2 00:19:13.125 } 00:19:13.125 ], 00:19:13.125 "driver_specific": { 00:19:13.125 "raid": { 00:19:13.125 "uuid": "d4f541f7-51d2-423f-a7d8-41c04692b75b", 00:19:13.125 "strip_size_kb": 64, 00:19:13.125 "state": "online", 00:19:13.125 "raid_level": "raid0", 00:19:13.125 "superblock": true, 00:19:13.125 "num_base_bdevs": 4, 00:19:13.125 "num_base_bdevs_discovered": 4, 00:19:13.125 "num_base_bdevs_operational": 4, 00:19:13.125 "base_bdevs_list": [ 00:19:13.125 { 00:19:13.125 "name": "pt1", 00:19:13.125 "uuid": "cff71c90-835c-54cd-a9a4-ce088a32a8e4", 00:19:13.125 "is_configured": true, 00:19:13.125 "data_offset": 2048, 00:19:13.125 "data_size": 63488 00:19:13.125 }, 00:19:13.125 { 00:19:13.125 "name": "pt2", 00:19:13.125 "uuid": "37314f72-0be4-548b-b602-b88827c46a31", 00:19:13.125 
"is_configured": true, 00:19:13.125 "data_offset": 2048, 00:19:13.125 "data_size": 63488 00:19:13.125 }, 00:19:13.125 { 00:19:13.125 "name": "pt3", 00:19:13.125 "uuid": "cc6f5c45-2b05-5e1d-994c-a077d3408f40", 00:19:13.125 "is_configured": true, 00:19:13.125 "data_offset": 2048, 00:19:13.125 "data_size": 63488 00:19:13.125 }, 00:19:13.125 { 00:19:13.125 "name": "pt4", 00:19:13.125 "uuid": "2e12da70-ef6c-5e25-a802-76825374493a", 00:19:13.125 "is_configured": true, 00:19:13.125 "data_offset": 2048, 00:19:13.125 "data_size": 63488 00:19:13.125 } 00:19:13.125 ] 00:19:13.125 } 00:19:13.125 } 00:19:13.125 }' 00:19:13.125 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:13.383 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:19:13.383 pt2 00:19:13.383 pt3 00:19:13.383 pt4' 00:19:13.383 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:13.383 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:13.383 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:13.383 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:13.383 "name": "pt1", 00:19:13.383 "aliases": [ 00:19:13.383 "cff71c90-835c-54cd-a9a4-ce088a32a8e4" 00:19:13.383 ], 00:19:13.383 "product_name": "passthru", 00:19:13.383 "block_size": 512, 00:19:13.383 "num_blocks": 65536, 00:19:13.383 "uuid": "cff71c90-835c-54cd-a9a4-ce088a32a8e4", 00:19:13.383 "assigned_rate_limits": { 00:19:13.383 "rw_ios_per_sec": 0, 00:19:13.383 "rw_mbytes_per_sec": 0, 00:19:13.383 "r_mbytes_per_sec": 0, 00:19:13.383 "w_mbytes_per_sec": 0 00:19:13.383 }, 00:19:13.383 "claimed": true, 00:19:13.383 "claim_type": "exclusive_write", 00:19:13.383 "zoned": false, 00:19:13.383 "supported_io_types": { 00:19:13.383 "read": true, 00:19:13.383 "write": true, 00:19:13.383 "unmap": true, 00:19:13.383 "write_zeroes": true, 00:19:13.383 "flush": true, 00:19:13.383 "reset": true, 00:19:13.383 "compare": false, 00:19:13.383 "compare_and_write": false, 00:19:13.383 "abort": true, 00:19:13.383 "nvme_admin": false, 00:19:13.383 "nvme_io": false 00:19:13.383 }, 00:19:13.383 "memory_domains": [ 00:19:13.383 { 00:19:13.383 "dma_device_id": "system", 00:19:13.383 "dma_device_type": 1 00:19:13.383 }, 00:19:13.384 { 00:19:13.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.384 "dma_device_type": 2 00:19:13.384 } 00:19:13.384 ], 00:19:13.384 "driver_specific": { 00:19:13.384 "passthru": { 00:19:13.384 "name": "pt1", 00:19:13.384 "base_bdev_name": "malloc1" 00:19:13.384 } 00:19:13.384 } 00:19:13.384 }' 00:19:13.384 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:13.642 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:13.642 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:13.642 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:13.642 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:13.642 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:13.642 23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:13.900 
23:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:13.900 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:13.900 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:13.900 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:13.900 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:13.900 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:13.900 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:13.900 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:14.160 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:14.160 "name": "pt2", 00:19:14.160 "aliases": [ 00:19:14.160 "37314f72-0be4-548b-b602-b88827c46a31" 00:19:14.160 ], 00:19:14.160 "product_name": "passthru", 00:19:14.160 "block_size": 512, 00:19:14.160 "num_blocks": 65536, 00:19:14.160 "uuid": "37314f72-0be4-548b-b602-b88827c46a31", 00:19:14.160 "assigned_rate_limits": { 00:19:14.160 "rw_ios_per_sec": 0, 00:19:14.160 "rw_mbytes_per_sec": 0, 00:19:14.160 "r_mbytes_per_sec": 0, 00:19:14.160 "w_mbytes_per_sec": 0 00:19:14.160 }, 00:19:14.160 "claimed": true, 00:19:14.160 "claim_type": "exclusive_write", 00:19:14.160 "zoned": false, 00:19:14.160 "supported_io_types": { 00:19:14.160 "read": true, 00:19:14.160 "write": true, 00:19:14.160 "unmap": true, 00:19:14.160 "write_zeroes": true, 00:19:14.160 "flush": true, 00:19:14.160 "reset": true, 00:19:14.160 "compare": false, 00:19:14.160 "compare_and_write": false, 00:19:14.160 "abort": true, 00:19:14.160 "nvme_admin": false, 00:19:14.160 "nvme_io": false 00:19:14.160 }, 00:19:14.160 "memory_domains": [ 00:19:14.160 { 00:19:14.160 "dma_device_id": "system", 00:19:14.160 "dma_device_type": 1 00:19:14.160 }, 00:19:14.160 { 00:19:14.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.160 "dma_device_type": 2 00:19:14.160 } 00:19:14.160 ], 00:19:14.160 "driver_specific": { 00:19:14.160 "passthru": { 00:19:14.160 "name": "pt2", 00:19:14.160 "base_bdev_name": "malloc2" 00:19:14.160 } 00:19:14.160 } 00:19:14.160 }' 00:19:14.160 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:14.419 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:14.419 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:14.419 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:14.419 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:14.419 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:14.419 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:14.419 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:14.677 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:14.677 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:14.677 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:14.677 23:34:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:14.677 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:14.677 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:14.677 23:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:14.969 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:14.969 "name": "pt3", 00:19:14.969 "aliases": [ 00:19:14.969 "cc6f5c45-2b05-5e1d-994c-a077d3408f40" 00:19:14.969 ], 00:19:14.969 "product_name": "passthru", 00:19:14.969 "block_size": 512, 00:19:14.969 "num_blocks": 65536, 00:19:14.969 "uuid": "cc6f5c45-2b05-5e1d-994c-a077d3408f40", 00:19:14.969 "assigned_rate_limits": { 00:19:14.969 "rw_ios_per_sec": 0, 00:19:14.969 "rw_mbytes_per_sec": 0, 00:19:14.969 "r_mbytes_per_sec": 0, 00:19:14.969 "w_mbytes_per_sec": 0 00:19:14.969 }, 00:19:14.969 "claimed": true, 00:19:14.969 "claim_type": "exclusive_write", 00:19:14.969 "zoned": false, 00:19:14.969 "supported_io_types": { 00:19:14.969 "read": true, 00:19:14.969 "write": true, 00:19:14.969 "unmap": true, 00:19:14.969 "write_zeroes": true, 00:19:14.969 "flush": true, 00:19:14.969 "reset": true, 00:19:14.969 "compare": false, 00:19:14.969 "compare_and_write": false, 00:19:14.969 "abort": true, 00:19:14.969 "nvme_admin": false, 00:19:14.969 "nvme_io": false 00:19:14.969 }, 00:19:14.969 "memory_domains": [ 00:19:14.969 { 00:19:14.969 "dma_device_id": "system", 00:19:14.969 "dma_device_type": 1 00:19:14.969 }, 00:19:14.969 { 00:19:14.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.969 "dma_device_type": 2 00:19:14.969 } 00:19:14.969 ], 00:19:14.969 "driver_specific": { 00:19:14.969 "passthru": { 00:19:14.969 "name": "pt3", 00:19:14.969 "base_bdev_name": "malloc3" 00:19:14.969 } 00:19:14.969 } 00:19:14.969 }' 00:19:14.969 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:14.969 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:14.969 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:14.969 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:15.228 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:15.228 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:15.228 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:15.228 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:15.228 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:15.228 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:15.228 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:15.487 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:15.487 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:15.487 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:19:15.487 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:15.746 23:34:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:15.746 "name": "pt4", 00:19:15.746 "aliases": [ 00:19:15.746 "2e12da70-ef6c-5e25-a802-76825374493a" 00:19:15.746 ], 00:19:15.746 "product_name": "passthru", 00:19:15.746 "block_size": 512, 00:19:15.746 "num_blocks": 65536, 00:19:15.746 "uuid": "2e12da70-ef6c-5e25-a802-76825374493a", 00:19:15.746 "assigned_rate_limits": { 00:19:15.746 "rw_ios_per_sec": 0, 00:19:15.746 "rw_mbytes_per_sec": 0, 00:19:15.746 "r_mbytes_per_sec": 0, 00:19:15.746 "w_mbytes_per_sec": 0 00:19:15.746 }, 00:19:15.746 "claimed": true, 00:19:15.746 "claim_type": "exclusive_write", 00:19:15.746 "zoned": false, 00:19:15.746 "supported_io_types": { 00:19:15.746 "read": true, 00:19:15.746 "write": true, 00:19:15.746 "unmap": true, 00:19:15.746 "write_zeroes": true, 00:19:15.746 "flush": true, 00:19:15.746 "reset": true, 00:19:15.746 "compare": false, 00:19:15.746 "compare_and_write": false, 00:19:15.746 "abort": true, 00:19:15.746 "nvme_admin": false, 00:19:15.746 "nvme_io": false 00:19:15.746 }, 00:19:15.746 "memory_domains": [ 00:19:15.746 { 00:19:15.746 "dma_device_id": "system", 00:19:15.746 "dma_device_type": 1 00:19:15.746 }, 00:19:15.746 { 00:19:15.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.746 "dma_device_type": 2 00:19:15.746 } 00:19:15.746 ], 00:19:15.746 "driver_specific": { 00:19:15.746 "passthru": { 00:19:15.746 "name": "pt4", 00:19:15.746 "base_bdev_name": "malloc4" 00:19:15.746 } 00:19:15.746 } 00:19:15.746 }' 00:19:15.746 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:15.746 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:15.746 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:15.746 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:15.746 23:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:15.746 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:15.746 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:16.005 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:16.005 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:16.005 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:16.005 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:16.005 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:16.005 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:16.005 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:16.349 [2024-05-14 23:34:39.443975] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d4f541f7-51d2-423f-a7d8-41c04692b75b '!=' d4f541f7-51d2-423f-a7d8-41c04692b75b ']' 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # 
return 1 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 66423 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 66423 ']' 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 66423 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66423 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:16.349 killing process with pid 66423 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66423' 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 66423 00:19:16.349 23:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 66423 00:19:16.349 [2024-05-14 23:34:39.483465] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:16.349 [2024-05-14 23:34:39.483545] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.350 [2024-05-14 23:34:39.483595] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.350 [2024-05-14 23:34:39.483605] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:19:16.610 [2024-05-14 23:34:39.818989] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:17.985 23:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:19:17.985 00:19:17.985 real 0m17.916s 00:19:17.986 user 0m32.584s 00:19:17.986 sys 0m1.879s 00:19:17.986 23:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:17.986 23:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.986 ************************************ 00:19:17.986 END TEST raid_superblock_test 00:19:17.986 ************************************ 00:19:17.986 23:34:41 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:19:17.986 23:34:41 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:19:17.986 23:34:41 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:17.986 23:34:41 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:17.986 23:34:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.986 ************************************ 00:19:17.986 START TEST raid_state_function_test 00:19:17.986 ************************************ 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 false 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:19:17.986 23:34:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:17.986 Process raid pid: 66982 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=66982 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 66982' 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 66982 /var/tmp/spdk-raid.sock 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 66982 ']' 00:19:17.986 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:17.986 23:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.986 [2024-05-14 23:34:41.270752] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:19:17.986 [2024-05-14 23:34:41.270934] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.245 [2024-05-14 23:34:41.425203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.504 [2024-05-14 23:34:41.645978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.762 [2024-05-14 23:34:41.843815] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:19.021 [2024-05-14 23:34:42.245698] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:19.021 [2024-05-14 23:34:42.245789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:19.021 [2024-05-14 23:34:42.245808] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:19.021 [2024-05-14 23:34:42.245831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:19.021 [2024-05-14 23:34:42.245842] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:19.021 [2024-05-14 23:34:42.245905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:19.021 [2024-05-14 23:34:42.245919] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:19.021 [2024-05-14 23:34:42.245951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
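Condensed, the flow this raid_state_function_test run traces is: start bdev_svc listening on /var/tmp/spdk-raid.sock, issue bdev_raid_create for a concat array (-z 64) over four base bdevs that do not exist yet (leaving Existed_Raid in the "configuring" state), then register BaseBdev1..BaseBdev4 with bdev_malloc_create until all four are claimed and the array goes "online". A minimal standalone sketch of that sequence, assuming an SPDK app is already listening on the same socket; the $RPC/$SOCK shorthands, the loop form, and the trailing .state jq filter are conveniences added here, and the pid/waitforlisten handling of the real test is omitted:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock
  # Create the concat array first; the base bdevs are missing, so (as the log shows)
  # the RPC is accepted and Existed_Raid stays in the "configuring" state.
  $RPC -s $SOCK bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # Register the base bdevs one at a time (32 MB, 512-byte blocks, i.e. 65536 blocks each);
  # each one is claimed by the raid bdev as it appears.
  for i in 1 2 3 4; do
      $RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev$i
  done
  # With all four claimed the array transitions to "online"; check it the same way the test does.
  $RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'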
00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.021 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.279 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:19.279 "name": "Existed_Raid", 00:19:19.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.279 "strip_size_kb": 64, 00:19:19.279 "state": "configuring", 00:19:19.279 "raid_level": "concat", 00:19:19.279 "superblock": false, 00:19:19.279 "num_base_bdevs": 4, 00:19:19.279 "num_base_bdevs_discovered": 0, 00:19:19.279 "num_base_bdevs_operational": 4, 00:19:19.279 "base_bdevs_list": [ 00:19:19.279 { 00:19:19.279 "name": "BaseBdev1", 00:19:19.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.279 "is_configured": false, 00:19:19.279 "data_offset": 0, 00:19:19.279 "data_size": 0 00:19:19.279 }, 00:19:19.279 { 00:19:19.279 "name": "BaseBdev2", 00:19:19.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.279 "is_configured": false, 00:19:19.279 "data_offset": 0, 00:19:19.279 "data_size": 0 00:19:19.279 }, 00:19:19.279 { 00:19:19.279 "name": "BaseBdev3", 00:19:19.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.279 "is_configured": false, 00:19:19.279 "data_offset": 0, 00:19:19.279 "data_size": 0 00:19:19.279 }, 00:19:19.279 { 00:19:19.279 "name": "BaseBdev4", 00:19:19.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.279 "is_configured": false, 00:19:19.279 "data_offset": 0, 00:19:19.279 "data_size": 0 00:19:19.279 } 00:19:19.279 ] 00:19:19.279 }' 00:19:19.279 23:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:19.279 23:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.215 23:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:20.215 [2024-05-14 23:34:43.389790] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:20.215 [2024-05-14 23:34:43.389871] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:19:20.215 23:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:20.473 [2024-05-14 23:34:43.593839] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:20.473 [2024-05-14 23:34:43.593948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:20.473 [2024-05-14 23:34:43.594000] bdev.c:8109:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:19:20.473 [2024-05-14 23:34:43.594039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:20.473 [2024-05-14 23:34:43.594052] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:20.473 [2024-05-14 23:34:43.594074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:20.473 [2024-05-14 23:34:43.594085] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:20.473 [2024-05-14 23:34:43.594116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:20.473 23:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:20.732 [2024-05-14 23:34:43.884959] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.732 BaseBdev1 00:19:20.732 23:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:19:20.732 23:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:20.732 23:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:20.732 23:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:20.732 23:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:20.732 23:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:20.732 23:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:20.991 23:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:21.250 [ 00:19:21.250 { 00:19:21.250 "name": "BaseBdev1", 00:19:21.250 "aliases": [ 00:19:21.250 "58ed67bf-c400-4ab3-8936-6701ee02869b" 00:19:21.250 ], 00:19:21.250 "product_name": "Malloc disk", 00:19:21.250 "block_size": 512, 00:19:21.250 "num_blocks": 65536, 00:19:21.250 "uuid": "58ed67bf-c400-4ab3-8936-6701ee02869b", 00:19:21.250 "assigned_rate_limits": { 00:19:21.250 "rw_ios_per_sec": 0, 00:19:21.250 "rw_mbytes_per_sec": 0, 00:19:21.250 "r_mbytes_per_sec": 0, 00:19:21.250 "w_mbytes_per_sec": 0 00:19:21.250 }, 00:19:21.250 "claimed": true, 00:19:21.250 "claim_type": "exclusive_write", 00:19:21.250 "zoned": false, 00:19:21.250 "supported_io_types": { 00:19:21.250 "read": true, 00:19:21.250 "write": true, 00:19:21.250 "unmap": true, 00:19:21.250 "write_zeroes": true, 00:19:21.250 "flush": true, 00:19:21.250 "reset": true, 00:19:21.250 "compare": false, 00:19:21.250 "compare_and_write": false, 00:19:21.250 "abort": true, 00:19:21.250 "nvme_admin": false, 00:19:21.250 "nvme_io": false 00:19:21.250 }, 00:19:21.250 "memory_domains": [ 00:19:21.250 { 00:19:21.250 "dma_device_id": "system", 00:19:21.250 "dma_device_type": 1 00:19:21.250 }, 00:19:21.250 { 00:19:21.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.250 "dma_device_type": 2 00:19:21.250 } 00:19:21.250 ], 00:19:21.250 "driver_specific": {} 00:19:21.250 } 00:19:21.250 ] 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # return 0 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.250 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.509 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:21.509 "name": "Existed_Raid", 00:19:21.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.509 "strip_size_kb": 64, 00:19:21.509 "state": "configuring", 00:19:21.509 "raid_level": "concat", 00:19:21.509 "superblock": false, 00:19:21.509 "num_base_bdevs": 4, 00:19:21.509 "num_base_bdevs_discovered": 1, 00:19:21.509 "num_base_bdevs_operational": 4, 00:19:21.509 "base_bdevs_list": [ 00:19:21.509 { 00:19:21.509 "name": "BaseBdev1", 00:19:21.509 "uuid": "58ed67bf-c400-4ab3-8936-6701ee02869b", 00:19:21.509 "is_configured": true, 00:19:21.509 "data_offset": 0, 00:19:21.509 "data_size": 65536 00:19:21.509 }, 00:19:21.509 { 00:19:21.509 "name": "BaseBdev2", 00:19:21.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.509 "is_configured": false, 00:19:21.509 "data_offset": 0, 00:19:21.509 "data_size": 0 00:19:21.509 }, 00:19:21.509 { 00:19:21.509 "name": "BaseBdev3", 00:19:21.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.509 "is_configured": false, 00:19:21.509 "data_offset": 0, 00:19:21.509 "data_size": 0 00:19:21.509 }, 00:19:21.509 { 00:19:21.509 "name": "BaseBdev4", 00:19:21.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.509 "is_configured": false, 00:19:21.509 "data_offset": 0, 00:19:21.509 "data_size": 0 00:19:21.509 } 00:19:21.509 ] 00:19:21.509 }' 00:19:21.509 23:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:21.509 23:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.077 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:22.335 [2024-05-14 23:34:45.449348] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:22.335 [2024-05-14 23:34:45.449440] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:19:22.335 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:22.593 [2024-05-14 23:34:45.649427] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:22.593 [2024-05-14 23:34:45.650977] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:22.593 [2024-05-14 23:34:45.651062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:22.593 [2024-05-14 23:34:45.651097] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:22.593 [2024-05-14 23:34:45.651131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:22.593 [2024-05-14 23:34:45.651145] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:22.593 [2024-05-14 23:34:45.651183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.593 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.852 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:22.852 "name": "Existed_Raid", 00:19:22.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.852 "strip_size_kb": 64, 00:19:22.852 "state": "configuring", 00:19:22.852 "raid_level": "concat", 00:19:22.852 "superblock": false, 00:19:22.852 "num_base_bdevs": 4, 00:19:22.852 "num_base_bdevs_discovered": 1, 00:19:22.852 "num_base_bdevs_operational": 4, 00:19:22.852 "base_bdevs_list": [ 00:19:22.852 { 00:19:22.852 "name": "BaseBdev1", 00:19:22.852 "uuid": "58ed67bf-c400-4ab3-8936-6701ee02869b", 00:19:22.852 
"is_configured": true, 00:19:22.852 "data_offset": 0, 00:19:22.852 "data_size": 65536 00:19:22.852 }, 00:19:22.852 { 00:19:22.852 "name": "BaseBdev2", 00:19:22.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.852 "is_configured": false, 00:19:22.852 "data_offset": 0, 00:19:22.852 "data_size": 0 00:19:22.852 }, 00:19:22.852 { 00:19:22.852 "name": "BaseBdev3", 00:19:22.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.852 "is_configured": false, 00:19:22.852 "data_offset": 0, 00:19:22.852 "data_size": 0 00:19:22.852 }, 00:19:22.852 { 00:19:22.852 "name": "BaseBdev4", 00:19:22.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.852 "is_configured": false, 00:19:22.852 "data_offset": 0, 00:19:22.852 "data_size": 0 00:19:22.852 } 00:19:22.852 ] 00:19:22.852 }' 00:19:22.852 23:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:22.852 23:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.418 23:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:23.676 BaseBdev2 00:19:23.676 [2024-05-14 23:34:46.899304] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:23.676 23:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:19:23.676 23:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:23.676 23:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:23.676 23:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:23.676 23:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:23.676 23:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:23.676 23:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:23.935 23:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:24.193 [ 00:19:24.193 { 00:19:24.193 "name": "BaseBdev2", 00:19:24.193 "aliases": [ 00:19:24.193 "463bca26-24a0-42ba-a636-f4a40ae09417" 00:19:24.193 ], 00:19:24.193 "product_name": "Malloc disk", 00:19:24.193 "block_size": 512, 00:19:24.193 "num_blocks": 65536, 00:19:24.193 "uuid": "463bca26-24a0-42ba-a636-f4a40ae09417", 00:19:24.193 "assigned_rate_limits": { 00:19:24.193 "rw_ios_per_sec": 0, 00:19:24.193 "rw_mbytes_per_sec": 0, 00:19:24.193 "r_mbytes_per_sec": 0, 00:19:24.193 "w_mbytes_per_sec": 0 00:19:24.193 }, 00:19:24.194 "claimed": true, 00:19:24.194 "claim_type": "exclusive_write", 00:19:24.194 "zoned": false, 00:19:24.194 "supported_io_types": { 00:19:24.194 "read": true, 00:19:24.194 "write": true, 00:19:24.194 "unmap": true, 00:19:24.194 "write_zeroes": true, 00:19:24.194 "flush": true, 00:19:24.194 "reset": true, 00:19:24.194 "compare": false, 00:19:24.194 "compare_and_write": false, 00:19:24.194 "abort": true, 00:19:24.194 "nvme_admin": false, 00:19:24.194 "nvme_io": false 00:19:24.194 }, 00:19:24.194 "memory_domains": [ 00:19:24.194 { 00:19:24.194 "dma_device_id": "system", 00:19:24.194 "dma_device_type": 1 
00:19:24.194 }, 00:19:24.194 { 00:19:24.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.194 "dma_device_type": 2 00:19:24.194 } 00:19:24.194 ], 00:19:24.194 "driver_specific": {} 00:19:24.194 } 00:19:24.194 ] 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.194 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.452 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:24.452 "name": "Existed_Raid", 00:19:24.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.452 "strip_size_kb": 64, 00:19:24.452 "state": "configuring", 00:19:24.452 "raid_level": "concat", 00:19:24.452 "superblock": false, 00:19:24.452 "num_base_bdevs": 4, 00:19:24.452 "num_base_bdevs_discovered": 2, 00:19:24.452 "num_base_bdevs_operational": 4, 00:19:24.452 "base_bdevs_list": [ 00:19:24.452 { 00:19:24.452 "name": "BaseBdev1", 00:19:24.452 "uuid": "58ed67bf-c400-4ab3-8936-6701ee02869b", 00:19:24.452 "is_configured": true, 00:19:24.452 "data_offset": 0, 00:19:24.452 "data_size": 65536 00:19:24.452 }, 00:19:24.452 { 00:19:24.452 "name": "BaseBdev2", 00:19:24.452 "uuid": "463bca26-24a0-42ba-a636-f4a40ae09417", 00:19:24.452 "is_configured": true, 00:19:24.452 "data_offset": 0, 00:19:24.452 "data_size": 65536 00:19:24.452 }, 00:19:24.452 { 00:19:24.452 "name": "BaseBdev3", 00:19:24.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.452 "is_configured": false, 00:19:24.452 "data_offset": 0, 00:19:24.452 "data_size": 0 00:19:24.452 }, 00:19:24.452 { 00:19:24.452 "name": "BaseBdev4", 00:19:24.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.452 "is_configured": false, 00:19:24.452 "data_offset": 0, 00:19:24.452 "data_size": 0 00:19:24.452 } 00:19:24.452 ] 00:19:24.452 }' 00:19:24.452 23:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:24.452 
23:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.019 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:25.278 BaseBdev3 00:19:25.278 [2024-05-14 23:34:48.484395] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:25.278 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:19:25.278 23:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:19:25.278 23:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:25.278 23:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:25.278 23:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:25.278 23:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:25.278 23:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:25.536 23:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:25.795 [ 00:19:25.795 { 00:19:25.795 "name": "BaseBdev3", 00:19:25.795 "aliases": [ 00:19:25.795 "59702b7d-bbf6-4a77-902d-70ceda3117b1" 00:19:25.795 ], 00:19:25.795 "product_name": "Malloc disk", 00:19:25.795 "block_size": 512, 00:19:25.795 "num_blocks": 65536, 00:19:25.795 "uuid": "59702b7d-bbf6-4a77-902d-70ceda3117b1", 00:19:25.795 "assigned_rate_limits": { 00:19:25.795 "rw_ios_per_sec": 0, 00:19:25.795 "rw_mbytes_per_sec": 0, 00:19:25.795 "r_mbytes_per_sec": 0, 00:19:25.795 "w_mbytes_per_sec": 0 00:19:25.795 }, 00:19:25.795 "claimed": true, 00:19:25.795 "claim_type": "exclusive_write", 00:19:25.795 "zoned": false, 00:19:25.795 "supported_io_types": { 00:19:25.795 "read": true, 00:19:25.795 "write": true, 00:19:25.795 "unmap": true, 00:19:25.795 "write_zeroes": true, 00:19:25.795 "flush": true, 00:19:25.795 "reset": true, 00:19:25.795 "compare": false, 00:19:25.795 "compare_and_write": false, 00:19:25.795 "abort": true, 00:19:25.795 "nvme_admin": false, 00:19:25.795 "nvme_io": false 00:19:25.795 }, 00:19:25.795 "memory_domains": [ 00:19:25.795 { 00:19:25.795 "dma_device_id": "system", 00:19:25.795 "dma_device_type": 1 00:19:25.795 }, 00:19:25.795 { 00:19:25.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.795 "dma_device_type": 2 00:19:25.795 } 00:19:25.795 ], 00:19:25.795 "driver_specific": {} 00:19:25.795 } 00:19:25.795 ] 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 
00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.795 23:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.054 23:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:26.054 "name": "Existed_Raid", 00:19:26.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.055 "strip_size_kb": 64, 00:19:26.055 "state": "configuring", 00:19:26.055 "raid_level": "concat", 00:19:26.055 "superblock": false, 00:19:26.055 "num_base_bdevs": 4, 00:19:26.055 "num_base_bdevs_discovered": 3, 00:19:26.055 "num_base_bdevs_operational": 4, 00:19:26.055 "base_bdevs_list": [ 00:19:26.055 { 00:19:26.055 "name": "BaseBdev1", 00:19:26.055 "uuid": "58ed67bf-c400-4ab3-8936-6701ee02869b", 00:19:26.055 "is_configured": true, 00:19:26.055 "data_offset": 0, 00:19:26.055 "data_size": 65536 00:19:26.055 }, 00:19:26.055 { 00:19:26.055 "name": "BaseBdev2", 00:19:26.055 "uuid": "463bca26-24a0-42ba-a636-f4a40ae09417", 00:19:26.055 "is_configured": true, 00:19:26.055 "data_offset": 0, 00:19:26.055 "data_size": 65536 00:19:26.055 }, 00:19:26.055 { 00:19:26.055 "name": "BaseBdev3", 00:19:26.055 "uuid": "59702b7d-bbf6-4a77-902d-70ceda3117b1", 00:19:26.055 "is_configured": true, 00:19:26.055 "data_offset": 0, 00:19:26.055 "data_size": 65536 00:19:26.055 }, 00:19:26.055 { 00:19:26.055 "name": "BaseBdev4", 00:19:26.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.055 "is_configured": false, 00:19:26.055 "data_offset": 0, 00:19:26.055 "data_size": 0 00:19:26.055 } 00:19:26.055 ] 00:19:26.055 }' 00:19:26.055 23:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:26.055 23:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.621 23:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:26.880 [2024-05-14 23:34:49.995671] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:26.880 [2024-05-14 23:34:49.995716] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:19:26.880 [2024-05-14 23:34:49.995726] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:26.880 [2024-05-14 23:34:49.995840] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:26.880 [2024-05-14 23:34:49.996041] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 
00:19:26.880 [2024-05-14 23:34:49.996055] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:19:26.880 BaseBdev4 00:19:26.880 [2024-05-14 23:34:49.996578] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.880 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:19:26.880 23:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:19:26.880 23:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:26.880 23:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:26.880 23:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:26.880 23:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:26.880 23:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:27.138 23:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:27.396 [ 00:19:27.396 { 00:19:27.396 "name": "BaseBdev4", 00:19:27.396 "aliases": [ 00:19:27.396 "9b4353b1-8781-479e-a9e5-d120dbc35585" 00:19:27.396 ], 00:19:27.396 "product_name": "Malloc disk", 00:19:27.396 "block_size": 512, 00:19:27.396 "num_blocks": 65536, 00:19:27.396 "uuid": "9b4353b1-8781-479e-a9e5-d120dbc35585", 00:19:27.396 "assigned_rate_limits": { 00:19:27.396 "rw_ios_per_sec": 0, 00:19:27.396 "rw_mbytes_per_sec": 0, 00:19:27.396 "r_mbytes_per_sec": 0, 00:19:27.396 "w_mbytes_per_sec": 0 00:19:27.396 }, 00:19:27.396 "claimed": true, 00:19:27.396 "claim_type": "exclusive_write", 00:19:27.396 "zoned": false, 00:19:27.396 "supported_io_types": { 00:19:27.396 "read": true, 00:19:27.396 "write": true, 00:19:27.396 "unmap": true, 00:19:27.396 "write_zeroes": true, 00:19:27.396 "flush": true, 00:19:27.396 "reset": true, 00:19:27.396 "compare": false, 00:19:27.396 "compare_and_write": false, 00:19:27.396 "abort": true, 00:19:27.396 "nvme_admin": false, 00:19:27.396 "nvme_io": false 00:19:27.396 }, 00:19:27.396 "memory_domains": [ 00:19:27.396 { 00:19:27.396 "dma_device_id": "system", 00:19:27.396 "dma_device_type": 1 00:19:27.396 }, 00:19:27.396 { 00:19:27.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.396 "dma_device_type": 2 00:19:27.396 } 00:19:27.396 ], 00:19:27.396 "driver_specific": {} 00:19:27.396 } 00:19:27.396 ] 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:27.396 23:34:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.396 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.655 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:27.655 "name": "Existed_Raid", 00:19:27.655 "uuid": "576ed3ae-00fe-4621-a2dc-5da6390cd005", 00:19:27.655 "strip_size_kb": 64, 00:19:27.655 "state": "online", 00:19:27.655 "raid_level": "concat", 00:19:27.655 "superblock": false, 00:19:27.655 "num_base_bdevs": 4, 00:19:27.655 "num_base_bdevs_discovered": 4, 00:19:27.655 "num_base_bdevs_operational": 4, 00:19:27.655 "base_bdevs_list": [ 00:19:27.655 { 00:19:27.655 "name": "BaseBdev1", 00:19:27.655 "uuid": "58ed67bf-c400-4ab3-8936-6701ee02869b", 00:19:27.655 "is_configured": true, 00:19:27.655 "data_offset": 0, 00:19:27.655 "data_size": 65536 00:19:27.655 }, 00:19:27.655 { 00:19:27.655 "name": "BaseBdev2", 00:19:27.655 "uuid": "463bca26-24a0-42ba-a636-f4a40ae09417", 00:19:27.655 "is_configured": true, 00:19:27.655 "data_offset": 0, 00:19:27.655 "data_size": 65536 00:19:27.655 }, 00:19:27.655 { 00:19:27.655 "name": "BaseBdev3", 00:19:27.655 "uuid": "59702b7d-bbf6-4a77-902d-70ceda3117b1", 00:19:27.655 "is_configured": true, 00:19:27.655 "data_offset": 0, 00:19:27.655 "data_size": 65536 00:19:27.655 }, 00:19:27.655 { 00:19:27.655 "name": "BaseBdev4", 00:19:27.655 "uuid": "9b4353b1-8781-479e-a9e5-d120dbc35585", 00:19:27.655 "is_configured": true, 00:19:27.655 "data_offset": 0, 00:19:27.655 "data_size": 65536 00:19:27.655 } 00:19:27.655 ] 00:19:27.655 }' 00:19:27.655 23:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:27.655 23:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.223 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:19:28.223 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:19:28.223 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:28.223 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:28.223 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:28.223 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:28.223 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:28.223 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 
00:19:28.481 [2024-05-14 23:34:51.688158] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.481 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:28.481 "name": "Existed_Raid", 00:19:28.481 "aliases": [ 00:19:28.481 "576ed3ae-00fe-4621-a2dc-5da6390cd005" 00:19:28.481 ], 00:19:28.481 "product_name": "Raid Volume", 00:19:28.481 "block_size": 512, 00:19:28.481 "num_blocks": 262144, 00:19:28.481 "uuid": "576ed3ae-00fe-4621-a2dc-5da6390cd005", 00:19:28.481 "assigned_rate_limits": { 00:19:28.481 "rw_ios_per_sec": 0, 00:19:28.481 "rw_mbytes_per_sec": 0, 00:19:28.481 "r_mbytes_per_sec": 0, 00:19:28.481 "w_mbytes_per_sec": 0 00:19:28.481 }, 00:19:28.481 "claimed": false, 00:19:28.481 "zoned": false, 00:19:28.481 "supported_io_types": { 00:19:28.481 "read": true, 00:19:28.481 "write": true, 00:19:28.481 "unmap": true, 00:19:28.481 "write_zeroes": true, 00:19:28.481 "flush": true, 00:19:28.481 "reset": true, 00:19:28.481 "compare": false, 00:19:28.481 "compare_and_write": false, 00:19:28.481 "abort": false, 00:19:28.481 "nvme_admin": false, 00:19:28.481 "nvme_io": false 00:19:28.481 }, 00:19:28.481 "memory_domains": [ 00:19:28.481 { 00:19:28.481 "dma_device_id": "system", 00:19:28.481 "dma_device_type": 1 00:19:28.481 }, 00:19:28.481 { 00:19:28.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.481 "dma_device_type": 2 00:19:28.481 }, 00:19:28.481 { 00:19:28.481 "dma_device_id": "system", 00:19:28.481 "dma_device_type": 1 00:19:28.481 }, 00:19:28.481 { 00:19:28.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.481 "dma_device_type": 2 00:19:28.481 }, 00:19:28.481 { 00:19:28.481 "dma_device_id": "system", 00:19:28.481 "dma_device_type": 1 00:19:28.481 }, 00:19:28.481 { 00:19:28.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.481 "dma_device_type": 2 00:19:28.481 }, 00:19:28.481 { 00:19:28.481 "dma_device_id": "system", 00:19:28.481 "dma_device_type": 1 00:19:28.481 }, 00:19:28.481 { 00:19:28.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.481 "dma_device_type": 2 00:19:28.481 } 00:19:28.481 ], 00:19:28.481 "driver_specific": { 00:19:28.481 "raid": { 00:19:28.481 "uuid": "576ed3ae-00fe-4621-a2dc-5da6390cd005", 00:19:28.481 "strip_size_kb": 64, 00:19:28.481 "state": "online", 00:19:28.481 "raid_level": "concat", 00:19:28.481 "superblock": false, 00:19:28.481 "num_base_bdevs": 4, 00:19:28.481 "num_base_bdevs_discovered": 4, 00:19:28.481 "num_base_bdevs_operational": 4, 00:19:28.481 "base_bdevs_list": [ 00:19:28.481 { 00:19:28.481 "name": "BaseBdev1", 00:19:28.482 "uuid": "58ed67bf-c400-4ab3-8936-6701ee02869b", 00:19:28.482 "is_configured": true, 00:19:28.482 "data_offset": 0, 00:19:28.482 "data_size": 65536 00:19:28.482 }, 00:19:28.482 { 00:19:28.482 "name": "BaseBdev2", 00:19:28.482 "uuid": "463bca26-24a0-42ba-a636-f4a40ae09417", 00:19:28.482 "is_configured": true, 00:19:28.482 "data_offset": 0, 00:19:28.482 "data_size": 65536 00:19:28.482 }, 00:19:28.482 { 00:19:28.482 "name": "BaseBdev3", 00:19:28.482 "uuid": "59702b7d-bbf6-4a77-902d-70ceda3117b1", 00:19:28.482 "is_configured": true, 00:19:28.482 "data_offset": 0, 00:19:28.482 "data_size": 65536 00:19:28.482 }, 00:19:28.482 { 00:19:28.482 "name": "BaseBdev4", 00:19:28.482 "uuid": "9b4353b1-8781-479e-a9e5-d120dbc35585", 00:19:28.482 "is_configured": true, 00:19:28.482 "data_offset": 0, 00:19:28.482 "data_size": 65536 00:19:28.482 } 00:19:28.482 ] 00:19:28.482 } 00:19:28.482 } 00:19:28.482 }' 00:19:28.482 23:34:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:28.740 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:19:28.740 BaseBdev2 00:19:28.740 BaseBdev3 00:19:28.740 BaseBdev4' 00:19:28.740 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:28.740 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:28.740 23:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:28.740 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:28.740 "name": "BaseBdev1", 00:19:28.740 "aliases": [ 00:19:28.740 "58ed67bf-c400-4ab3-8936-6701ee02869b" 00:19:28.740 ], 00:19:28.740 "product_name": "Malloc disk", 00:19:28.740 "block_size": 512, 00:19:28.740 "num_blocks": 65536, 00:19:28.740 "uuid": "58ed67bf-c400-4ab3-8936-6701ee02869b", 00:19:28.740 "assigned_rate_limits": { 00:19:28.740 "rw_ios_per_sec": 0, 00:19:28.740 "rw_mbytes_per_sec": 0, 00:19:28.740 "r_mbytes_per_sec": 0, 00:19:28.740 "w_mbytes_per_sec": 0 00:19:28.740 }, 00:19:28.740 "claimed": true, 00:19:28.740 "claim_type": "exclusive_write", 00:19:28.740 "zoned": false, 00:19:28.740 "supported_io_types": { 00:19:28.740 "read": true, 00:19:28.740 "write": true, 00:19:28.740 "unmap": true, 00:19:28.740 "write_zeroes": true, 00:19:28.740 "flush": true, 00:19:28.740 "reset": true, 00:19:28.740 "compare": false, 00:19:28.740 "compare_and_write": false, 00:19:28.740 "abort": true, 00:19:28.740 "nvme_admin": false, 00:19:28.740 "nvme_io": false 00:19:28.740 }, 00:19:28.740 "memory_domains": [ 00:19:28.740 { 00:19:28.740 "dma_device_id": "system", 00:19:28.740 "dma_device_type": 1 00:19:28.740 }, 00:19:28.740 { 00:19:28.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.740 "dma_device_type": 2 00:19:28.740 } 00:19:28.740 ], 00:19:28.740 "driver_specific": {} 00:19:28.740 }' 00:19:28.740 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:28.998 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:28.998 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:28.998 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:28.998 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:28.998 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:28.998 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:29.257 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:29.257 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:29.257 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:29.257 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:29.257 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:29.257 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:29.257 23:34:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:29.257 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:29.824 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:29.824 "name": "BaseBdev2", 00:19:29.824 "aliases": [ 00:19:29.824 "463bca26-24a0-42ba-a636-f4a40ae09417" 00:19:29.824 ], 00:19:29.824 "product_name": "Malloc disk", 00:19:29.824 "block_size": 512, 00:19:29.824 "num_blocks": 65536, 00:19:29.824 "uuid": "463bca26-24a0-42ba-a636-f4a40ae09417", 00:19:29.824 "assigned_rate_limits": { 00:19:29.824 "rw_ios_per_sec": 0, 00:19:29.824 "rw_mbytes_per_sec": 0, 00:19:29.824 "r_mbytes_per_sec": 0, 00:19:29.824 "w_mbytes_per_sec": 0 00:19:29.824 }, 00:19:29.824 "claimed": true, 00:19:29.824 "claim_type": "exclusive_write", 00:19:29.824 "zoned": false, 00:19:29.824 "supported_io_types": { 00:19:29.824 "read": true, 00:19:29.824 "write": true, 00:19:29.824 "unmap": true, 00:19:29.824 "write_zeroes": true, 00:19:29.824 "flush": true, 00:19:29.824 "reset": true, 00:19:29.824 "compare": false, 00:19:29.824 "compare_and_write": false, 00:19:29.824 "abort": true, 00:19:29.824 "nvme_admin": false, 00:19:29.824 "nvme_io": false 00:19:29.824 }, 00:19:29.824 "memory_domains": [ 00:19:29.824 { 00:19:29.824 "dma_device_id": "system", 00:19:29.824 "dma_device_type": 1 00:19:29.824 }, 00:19:29.824 { 00:19:29.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.824 "dma_device_type": 2 00:19:29.824 } 00:19:29.824 ], 00:19:29.824 "driver_specific": {} 00:19:29.824 }' 00:19:29.824 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:29.824 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:29.824 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:29.824 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:29.824 23:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:29.824 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:29.824 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:29.824 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:30.081 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:30.081 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:30.081 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:30.081 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:30.081 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:30.081 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:30.081 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:30.340 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:30.340 "name": "BaseBdev3", 00:19:30.340 "aliases": [ 00:19:30.340 "59702b7d-bbf6-4a77-902d-70ceda3117b1" 00:19:30.340 ], 
00:19:30.340 "product_name": "Malloc disk", 00:19:30.340 "block_size": 512, 00:19:30.340 "num_blocks": 65536, 00:19:30.340 "uuid": "59702b7d-bbf6-4a77-902d-70ceda3117b1", 00:19:30.340 "assigned_rate_limits": { 00:19:30.340 "rw_ios_per_sec": 0, 00:19:30.340 "rw_mbytes_per_sec": 0, 00:19:30.340 "r_mbytes_per_sec": 0, 00:19:30.340 "w_mbytes_per_sec": 0 00:19:30.340 }, 00:19:30.340 "claimed": true, 00:19:30.340 "claim_type": "exclusive_write", 00:19:30.340 "zoned": false, 00:19:30.340 "supported_io_types": { 00:19:30.340 "read": true, 00:19:30.340 "write": true, 00:19:30.340 "unmap": true, 00:19:30.340 "write_zeroes": true, 00:19:30.340 "flush": true, 00:19:30.340 "reset": true, 00:19:30.340 "compare": false, 00:19:30.340 "compare_and_write": false, 00:19:30.340 "abort": true, 00:19:30.340 "nvme_admin": false, 00:19:30.340 "nvme_io": false 00:19:30.340 }, 00:19:30.340 "memory_domains": [ 00:19:30.340 { 00:19:30.340 "dma_device_id": "system", 00:19:30.340 "dma_device_type": 1 00:19:30.340 }, 00:19:30.340 { 00:19:30.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.340 "dma_device_type": 2 00:19:30.340 } 00:19:30.340 ], 00:19:30.340 "driver_specific": {} 00:19:30.340 }' 00:19:30.340 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:30.340 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:30.612 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:30.613 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:30.613 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:30.613 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:30.613 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:30.613 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:30.869 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:30.869 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:30.869 23:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:30.869 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:30.869 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:30.869 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:30.869 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:31.126 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:31.126 "name": "BaseBdev4", 00:19:31.126 "aliases": [ 00:19:31.126 "9b4353b1-8781-479e-a9e5-d120dbc35585" 00:19:31.126 ], 00:19:31.126 "product_name": "Malloc disk", 00:19:31.126 "block_size": 512, 00:19:31.126 "num_blocks": 65536, 00:19:31.126 "uuid": "9b4353b1-8781-479e-a9e5-d120dbc35585", 00:19:31.126 "assigned_rate_limits": { 00:19:31.126 "rw_ios_per_sec": 0, 00:19:31.126 "rw_mbytes_per_sec": 0, 00:19:31.126 "r_mbytes_per_sec": 0, 00:19:31.126 "w_mbytes_per_sec": 0 00:19:31.126 }, 00:19:31.126 "claimed": true, 00:19:31.126 "claim_type": "exclusive_write", 00:19:31.126 "zoned": false, 00:19:31.126 
"supported_io_types": { 00:19:31.126 "read": true, 00:19:31.126 "write": true, 00:19:31.126 "unmap": true, 00:19:31.126 "write_zeroes": true, 00:19:31.126 "flush": true, 00:19:31.126 "reset": true, 00:19:31.126 "compare": false, 00:19:31.126 "compare_and_write": false, 00:19:31.126 "abort": true, 00:19:31.126 "nvme_admin": false, 00:19:31.126 "nvme_io": false 00:19:31.126 }, 00:19:31.126 "memory_domains": [ 00:19:31.126 { 00:19:31.126 "dma_device_id": "system", 00:19:31.126 "dma_device_type": 1 00:19:31.126 }, 00:19:31.126 { 00:19:31.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.126 "dma_device_type": 2 00:19:31.126 } 00:19:31.126 ], 00:19:31.126 "driver_specific": {} 00:19:31.126 }' 00:19:31.126 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:31.126 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:31.384 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:31.384 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:31.384 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:31.384 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.384 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:31.384 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:31.642 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:31.642 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:31.642 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:31.642 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:31.642 23:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:31.901 [2024-05-14 23:34:54.988627] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:31.901 [2024-05-14 23:34:54.988662] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.901 [2024-05-14 23:34:54.988704] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.901 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.160 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.160 "name": "Existed_Raid", 00:19:32.160 "uuid": "576ed3ae-00fe-4621-a2dc-5da6390cd005", 00:19:32.160 "strip_size_kb": 64, 00:19:32.160 "state": "offline", 00:19:32.160 "raid_level": "concat", 00:19:32.160 "superblock": false, 00:19:32.160 "num_base_bdevs": 4, 00:19:32.160 "num_base_bdevs_discovered": 3, 00:19:32.160 "num_base_bdevs_operational": 3, 00:19:32.160 "base_bdevs_list": [ 00:19:32.160 { 00:19:32.160 "name": null, 00:19:32.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.160 "is_configured": false, 00:19:32.160 "data_offset": 0, 00:19:32.160 "data_size": 65536 00:19:32.160 }, 00:19:32.160 { 00:19:32.160 "name": "BaseBdev2", 00:19:32.160 "uuid": "463bca26-24a0-42ba-a636-f4a40ae09417", 00:19:32.160 "is_configured": true, 00:19:32.160 "data_offset": 0, 00:19:32.160 "data_size": 65536 00:19:32.160 }, 00:19:32.160 { 00:19:32.160 "name": "BaseBdev3", 00:19:32.160 "uuid": "59702b7d-bbf6-4a77-902d-70ceda3117b1", 00:19:32.160 "is_configured": true, 00:19:32.160 "data_offset": 0, 00:19:32.160 "data_size": 65536 00:19:32.160 }, 00:19:32.160 { 00:19:32.160 "name": "BaseBdev4", 00:19:32.160 "uuid": "9b4353b1-8781-479e-a9e5-d120dbc35585", 00:19:32.160 "is_configured": true, 00:19:32.160 "data_offset": 0, 00:19:32.160 "data_size": 65536 00:19:32.160 } 00:19:32.160 ] 00:19:32.160 }' 00:19:32.160 23:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.160 23:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.096 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:33.096 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.096 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.096 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:33.096 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:33.096 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.096 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:33.355 [2024-05-14 23:34:56.529907] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev2 00:19:33.355 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:33.355 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.355 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.355 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:33.614 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:33.614 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.614 23:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:33.872 [2024-05-14 23:34:57.019863] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:33.872 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:33.872 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.872 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.872 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:34.130 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:34.130 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:34.130 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:34.389 [2024-05-14 23:34:57.564688] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:34.389 [2024-05-14 23:34:57.564749] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:19:34.389 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:34.389 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:34.389 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.389 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:19:34.648 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:19:34.648 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:19:34.648 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:19:34.648 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:19:34.648 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:34.648 23:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:34.907 BaseBdev2 00:19:34.907 23:34:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:19:34.907 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:34.907 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:34.907 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:34.907 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:34.907 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:34.907 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:35.165 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:35.424 [ 00:19:35.424 { 00:19:35.424 "name": "BaseBdev2", 00:19:35.424 "aliases": [ 00:19:35.424 "5ebd42f4-0b8f-489b-8adf-68476a3ed086" 00:19:35.424 ], 00:19:35.424 "product_name": "Malloc disk", 00:19:35.424 "block_size": 512, 00:19:35.424 "num_blocks": 65536, 00:19:35.424 "uuid": "5ebd42f4-0b8f-489b-8adf-68476a3ed086", 00:19:35.424 "assigned_rate_limits": { 00:19:35.424 "rw_ios_per_sec": 0, 00:19:35.424 "rw_mbytes_per_sec": 0, 00:19:35.424 "r_mbytes_per_sec": 0, 00:19:35.424 "w_mbytes_per_sec": 0 00:19:35.424 }, 00:19:35.424 "claimed": false, 00:19:35.424 "zoned": false, 00:19:35.424 "supported_io_types": { 00:19:35.424 "read": true, 00:19:35.424 "write": true, 00:19:35.424 "unmap": true, 00:19:35.424 "write_zeroes": true, 00:19:35.424 "flush": true, 00:19:35.424 "reset": true, 00:19:35.424 "compare": false, 00:19:35.424 "compare_and_write": false, 00:19:35.424 "abort": true, 00:19:35.424 "nvme_admin": false, 00:19:35.424 "nvme_io": false 00:19:35.424 }, 00:19:35.424 "memory_domains": [ 00:19:35.424 { 00:19:35.424 "dma_device_id": "system", 00:19:35.424 "dma_device_type": 1 00:19:35.424 }, 00:19:35.424 { 00:19:35.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.424 "dma_device_type": 2 00:19:35.424 } 00:19:35.424 ], 00:19:35.424 "driver_specific": {} 00:19:35.424 } 00:19:35.424 ] 00:19:35.424 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:35.424 23:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:19:35.424 23:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:35.424 23:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:35.682 BaseBdev3 00:19:35.682 23:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:19:35.682 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:19:35.682 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:35.682 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:35.682 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:35.682 23:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:35.682 23:34:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:35.940 23:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:36.199 [ 00:19:36.199 { 00:19:36.199 "name": "BaseBdev3", 00:19:36.199 "aliases": [ 00:19:36.199 "0de6ae00-6e26-4eac-84ef-20fe3c060c2a" 00:19:36.199 ], 00:19:36.199 "product_name": "Malloc disk", 00:19:36.199 "block_size": 512, 00:19:36.199 "num_blocks": 65536, 00:19:36.199 "uuid": "0de6ae00-6e26-4eac-84ef-20fe3c060c2a", 00:19:36.199 "assigned_rate_limits": { 00:19:36.199 "rw_ios_per_sec": 0, 00:19:36.199 "rw_mbytes_per_sec": 0, 00:19:36.199 "r_mbytes_per_sec": 0, 00:19:36.199 "w_mbytes_per_sec": 0 00:19:36.199 }, 00:19:36.199 "claimed": false, 00:19:36.199 "zoned": false, 00:19:36.199 "supported_io_types": { 00:19:36.199 "read": true, 00:19:36.199 "write": true, 00:19:36.199 "unmap": true, 00:19:36.199 "write_zeroes": true, 00:19:36.199 "flush": true, 00:19:36.199 "reset": true, 00:19:36.199 "compare": false, 00:19:36.199 "compare_and_write": false, 00:19:36.199 "abort": true, 00:19:36.199 "nvme_admin": false, 00:19:36.199 "nvme_io": false 00:19:36.199 }, 00:19:36.199 "memory_domains": [ 00:19:36.199 { 00:19:36.199 "dma_device_id": "system", 00:19:36.199 "dma_device_type": 1 00:19:36.199 }, 00:19:36.199 { 00:19:36.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.199 "dma_device_type": 2 00:19:36.199 } 00:19:36.199 ], 00:19:36.199 "driver_specific": {} 00:19:36.199 } 00:19:36.199 ] 00:19:36.199 23:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:36.199 23:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:19:36.199 23:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:36.199 23:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:36.199 BaseBdev4 00:19:36.457 23:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:19:36.457 23:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:19:36.457 23:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:36.457 23:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:36.457 23:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:36.457 23:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:36.457 23:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:36.457 23:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:36.716 [ 00:19:36.716 { 00:19:36.716 "name": "BaseBdev4", 00:19:36.716 "aliases": [ 00:19:36.716 "f585f913-5ad6-4711-925a-5f1dbcf840cb" 00:19:36.716 ], 00:19:36.716 "product_name": "Malloc disk", 00:19:36.716 "block_size": 512, 00:19:36.716 "num_blocks": 65536, 00:19:36.716 
"uuid": "f585f913-5ad6-4711-925a-5f1dbcf840cb", 00:19:36.716 "assigned_rate_limits": { 00:19:36.716 "rw_ios_per_sec": 0, 00:19:36.716 "rw_mbytes_per_sec": 0, 00:19:36.716 "r_mbytes_per_sec": 0, 00:19:36.716 "w_mbytes_per_sec": 0 00:19:36.716 }, 00:19:36.716 "claimed": false, 00:19:36.716 "zoned": false, 00:19:36.716 "supported_io_types": { 00:19:36.716 "read": true, 00:19:36.716 "write": true, 00:19:36.716 "unmap": true, 00:19:36.716 "write_zeroes": true, 00:19:36.716 "flush": true, 00:19:36.716 "reset": true, 00:19:36.716 "compare": false, 00:19:36.716 "compare_and_write": false, 00:19:36.716 "abort": true, 00:19:36.716 "nvme_admin": false, 00:19:36.716 "nvme_io": false 00:19:36.716 }, 00:19:36.716 "memory_domains": [ 00:19:36.716 { 00:19:36.716 "dma_device_id": "system", 00:19:36.716 "dma_device_type": 1 00:19:36.716 }, 00:19:36.716 { 00:19:36.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.716 "dma_device_type": 2 00:19:36.716 } 00:19:36.716 ], 00:19:36.716 "driver_specific": {} 00:19:36.716 } 00:19:36.716 ] 00:19:36.716 23:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:36.716 23:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:19:36.716 23:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:19:36.716 23:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:36.974 [2024-05-14 23:35:00.095246] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:36.974 [2024-05-14 23:35:00.095336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:36.974 [2024-05-14 23:35:00.095385] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:36.974 [2024-05-14 23:35:00.097100] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:36.974 [2024-05-14 23:35:00.097143] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:36.974 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.233 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.233 "name": "Existed_Raid", 00:19:37.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.233 "strip_size_kb": 64, 00:19:37.233 "state": "configuring", 00:19:37.233 "raid_level": "concat", 00:19:37.233 "superblock": false, 00:19:37.233 "num_base_bdevs": 4, 00:19:37.233 "num_base_bdevs_discovered": 3, 00:19:37.233 "num_base_bdevs_operational": 4, 00:19:37.233 "base_bdevs_list": [ 00:19:37.233 { 00:19:37.233 "name": "BaseBdev1", 00:19:37.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.233 "is_configured": false, 00:19:37.233 "data_offset": 0, 00:19:37.233 "data_size": 0 00:19:37.233 }, 00:19:37.233 { 00:19:37.233 "name": "BaseBdev2", 00:19:37.233 "uuid": "5ebd42f4-0b8f-489b-8adf-68476a3ed086", 00:19:37.233 "is_configured": true, 00:19:37.233 "data_offset": 0, 00:19:37.233 "data_size": 65536 00:19:37.233 }, 00:19:37.233 { 00:19:37.233 "name": "BaseBdev3", 00:19:37.233 "uuid": "0de6ae00-6e26-4eac-84ef-20fe3c060c2a", 00:19:37.233 "is_configured": true, 00:19:37.233 "data_offset": 0, 00:19:37.233 "data_size": 65536 00:19:37.233 }, 00:19:37.233 { 00:19:37.233 "name": "BaseBdev4", 00:19:37.233 "uuid": "f585f913-5ad6-4711-925a-5f1dbcf840cb", 00:19:37.233 "is_configured": true, 00:19:37.233 "data_offset": 0, 00:19:37.233 "data_size": 65536 00:19:37.233 } 00:19:37.233 ] 00:19:37.233 }' 00:19:37.233 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.233 23:35:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.798 23:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:38.058 [2024-05-14 23:35:01.211428] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.058 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:38.315 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:38.316 "name": "Existed_Raid", 00:19:38.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.316 "strip_size_kb": 64, 00:19:38.316 "state": "configuring", 00:19:38.316 "raid_level": "concat", 00:19:38.316 "superblock": false, 00:19:38.316 "num_base_bdevs": 4, 00:19:38.316 "num_base_bdevs_discovered": 2, 00:19:38.316 "num_base_bdevs_operational": 4, 00:19:38.316 "base_bdevs_list": [ 00:19:38.316 { 00:19:38.316 "name": "BaseBdev1", 00:19:38.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.316 "is_configured": false, 00:19:38.316 "data_offset": 0, 00:19:38.316 "data_size": 0 00:19:38.316 }, 00:19:38.316 { 00:19:38.316 "name": null, 00:19:38.316 "uuid": "5ebd42f4-0b8f-489b-8adf-68476a3ed086", 00:19:38.316 "is_configured": false, 00:19:38.316 "data_offset": 0, 00:19:38.316 "data_size": 65536 00:19:38.316 }, 00:19:38.316 { 00:19:38.316 "name": "BaseBdev3", 00:19:38.316 "uuid": "0de6ae00-6e26-4eac-84ef-20fe3c060c2a", 00:19:38.316 "is_configured": true, 00:19:38.316 "data_offset": 0, 00:19:38.316 "data_size": 65536 00:19:38.316 }, 00:19:38.316 { 00:19:38.316 "name": "BaseBdev4", 00:19:38.316 "uuid": "f585f913-5ad6-4711-925a-5f1dbcf840cb", 00:19:38.316 "is_configured": true, 00:19:38.316 "data_offset": 0, 00:19:38.316 "data_size": 65536 00:19:38.316 } 00:19:38.316 ] 00:19:38.316 }' 00:19:38.316 23:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:38.316 23:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.881 23:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.881 23:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:39.140 23:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:19:39.140 23:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:39.398 [2024-05-14 23:35:02.646010] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.398 BaseBdev1 00:19:39.398 23:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:19:39.398 23:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:39.398 23:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:39.398 23:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:39.398 23:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:39.398 23:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:39.398 23:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:39.655 23:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:39.913 [ 00:19:39.913 { 00:19:39.913 "name": "BaseBdev1", 00:19:39.913 
"aliases": [ 00:19:39.913 "e9feb578-c0cc-45f9-be6a-43b0f2672829" 00:19:39.913 ], 00:19:39.913 "product_name": "Malloc disk", 00:19:39.913 "block_size": 512, 00:19:39.913 "num_blocks": 65536, 00:19:39.913 "uuid": "e9feb578-c0cc-45f9-be6a-43b0f2672829", 00:19:39.913 "assigned_rate_limits": { 00:19:39.913 "rw_ios_per_sec": 0, 00:19:39.913 "rw_mbytes_per_sec": 0, 00:19:39.913 "r_mbytes_per_sec": 0, 00:19:39.913 "w_mbytes_per_sec": 0 00:19:39.913 }, 00:19:39.913 "claimed": true, 00:19:39.913 "claim_type": "exclusive_write", 00:19:39.913 "zoned": false, 00:19:39.913 "supported_io_types": { 00:19:39.913 "read": true, 00:19:39.913 "write": true, 00:19:39.913 "unmap": true, 00:19:39.913 "write_zeroes": true, 00:19:39.913 "flush": true, 00:19:39.913 "reset": true, 00:19:39.913 "compare": false, 00:19:39.913 "compare_and_write": false, 00:19:39.913 "abort": true, 00:19:39.913 "nvme_admin": false, 00:19:39.913 "nvme_io": false 00:19:39.913 }, 00:19:39.913 "memory_domains": [ 00:19:39.914 { 00:19:39.914 "dma_device_id": "system", 00:19:39.914 "dma_device_type": 1 00:19:39.914 }, 00:19:39.914 { 00:19:39.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.914 "dma_device_type": 2 00:19:39.914 } 00:19:39.914 ], 00:19:39.914 "driver_specific": {} 00:19:39.914 } 00:19:39.914 ] 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.914 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.172 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.172 "name": "Existed_Raid", 00:19:40.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.172 "strip_size_kb": 64, 00:19:40.172 "state": "configuring", 00:19:40.172 "raid_level": "concat", 00:19:40.172 "superblock": false, 00:19:40.172 "num_base_bdevs": 4, 00:19:40.172 "num_base_bdevs_discovered": 3, 00:19:40.172 "num_base_bdevs_operational": 4, 00:19:40.172 "base_bdevs_list": [ 00:19:40.172 { 00:19:40.172 "name": "BaseBdev1", 00:19:40.172 "uuid": "e9feb578-c0cc-45f9-be6a-43b0f2672829", 00:19:40.172 "is_configured": true, 00:19:40.172 "data_offset": 0, 
00:19:40.172 "data_size": 65536 00:19:40.172 }, 00:19:40.172 { 00:19:40.172 "name": null, 00:19:40.172 "uuid": "5ebd42f4-0b8f-489b-8adf-68476a3ed086", 00:19:40.172 "is_configured": false, 00:19:40.172 "data_offset": 0, 00:19:40.172 "data_size": 65536 00:19:40.172 }, 00:19:40.172 { 00:19:40.172 "name": "BaseBdev3", 00:19:40.172 "uuid": "0de6ae00-6e26-4eac-84ef-20fe3c060c2a", 00:19:40.172 "is_configured": true, 00:19:40.172 "data_offset": 0, 00:19:40.172 "data_size": 65536 00:19:40.172 }, 00:19:40.172 { 00:19:40.172 "name": "BaseBdev4", 00:19:40.172 "uuid": "f585f913-5ad6-4711-925a-5f1dbcf840cb", 00:19:40.172 "is_configured": true, 00:19:40.172 "data_offset": 0, 00:19:40.172 "data_size": 65536 00:19:40.172 } 00:19:40.172 ] 00:19:40.172 }' 00:19:40.172 23:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.172 23:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.104 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:41.104 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.104 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:41.104 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:41.362 [2024-05-14 23:35:04.458396] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.362 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.619 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.619 "name": "Existed_Raid", 00:19:41.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.619 "strip_size_kb": 64, 00:19:41.619 "state": "configuring", 00:19:41.619 "raid_level": "concat", 00:19:41.619 "superblock": false, 00:19:41.619 "num_base_bdevs": 4, 00:19:41.619 
"num_base_bdevs_discovered": 2, 00:19:41.619 "num_base_bdevs_operational": 4, 00:19:41.619 "base_bdevs_list": [ 00:19:41.619 { 00:19:41.619 "name": "BaseBdev1", 00:19:41.619 "uuid": "e9feb578-c0cc-45f9-be6a-43b0f2672829", 00:19:41.619 "is_configured": true, 00:19:41.619 "data_offset": 0, 00:19:41.619 "data_size": 65536 00:19:41.619 }, 00:19:41.619 { 00:19:41.619 "name": null, 00:19:41.619 "uuid": "5ebd42f4-0b8f-489b-8adf-68476a3ed086", 00:19:41.619 "is_configured": false, 00:19:41.619 "data_offset": 0, 00:19:41.619 "data_size": 65536 00:19:41.619 }, 00:19:41.619 { 00:19:41.619 "name": null, 00:19:41.619 "uuid": "0de6ae00-6e26-4eac-84ef-20fe3c060c2a", 00:19:41.619 "is_configured": false, 00:19:41.619 "data_offset": 0, 00:19:41.619 "data_size": 65536 00:19:41.619 }, 00:19:41.619 { 00:19:41.619 "name": "BaseBdev4", 00:19:41.619 "uuid": "f585f913-5ad6-4711-925a-5f1dbcf840cb", 00:19:41.619 "is_configured": true, 00:19:41.619 "data_offset": 0, 00:19:41.619 "data_size": 65536 00:19:41.619 } 00:19:41.619 ] 00:19:41.619 }' 00:19:41.619 23:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.619 23:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.185 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.185 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:42.443 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:19:42.443 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:42.701 [2024-05-14 23:35:05.834638] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:42.701 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:42.701 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:42.701 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:42.701 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:42.701 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:42.702 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:42.702 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.702 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.702 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.702 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.702 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.702 23:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.960 23:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:19:42.960 "name": "Existed_Raid", 00:19:42.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.960 "strip_size_kb": 64, 00:19:42.960 "state": "configuring", 00:19:42.960 "raid_level": "concat", 00:19:42.960 "superblock": false, 00:19:42.960 "num_base_bdevs": 4, 00:19:42.960 "num_base_bdevs_discovered": 3, 00:19:42.960 "num_base_bdevs_operational": 4, 00:19:42.960 "base_bdevs_list": [ 00:19:42.960 { 00:19:42.960 "name": "BaseBdev1", 00:19:42.960 "uuid": "e9feb578-c0cc-45f9-be6a-43b0f2672829", 00:19:42.960 "is_configured": true, 00:19:42.960 "data_offset": 0, 00:19:42.960 "data_size": 65536 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "name": null, 00:19:42.960 "uuid": "5ebd42f4-0b8f-489b-8adf-68476a3ed086", 00:19:42.960 "is_configured": false, 00:19:42.960 "data_offset": 0, 00:19:42.960 "data_size": 65536 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "name": "BaseBdev3", 00:19:42.960 "uuid": "0de6ae00-6e26-4eac-84ef-20fe3c060c2a", 00:19:42.960 "is_configured": true, 00:19:42.960 "data_offset": 0, 00:19:42.960 "data_size": 65536 00:19:42.960 }, 00:19:42.960 { 00:19:42.960 "name": "BaseBdev4", 00:19:42.960 "uuid": "f585f913-5ad6-4711-925a-5f1dbcf840cb", 00:19:42.960 "is_configured": true, 00:19:42.960 "data_offset": 0, 00:19:42.960 "data_size": 65536 00:19:42.960 } 00:19:42.960 ] 00:19:42.960 }' 00:19:42.960 23:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.960 23:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.526 23:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.526 23:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:43.784 23:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:19:43.784 23:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:44.043 [2024-05-14 23:35:07.122838] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.043 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.301 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.301 "name": "Existed_Raid", 00:19:44.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.301 "strip_size_kb": 64, 00:19:44.301 "state": "configuring", 00:19:44.301 "raid_level": "concat", 00:19:44.301 "superblock": false, 00:19:44.301 "num_base_bdevs": 4, 00:19:44.301 "num_base_bdevs_discovered": 2, 00:19:44.301 "num_base_bdevs_operational": 4, 00:19:44.301 "base_bdevs_list": [ 00:19:44.301 { 00:19:44.301 "name": null, 00:19:44.301 "uuid": "e9feb578-c0cc-45f9-be6a-43b0f2672829", 00:19:44.301 "is_configured": false, 00:19:44.301 "data_offset": 0, 00:19:44.301 "data_size": 65536 00:19:44.301 }, 00:19:44.301 { 00:19:44.301 "name": null, 00:19:44.301 "uuid": "5ebd42f4-0b8f-489b-8adf-68476a3ed086", 00:19:44.301 "is_configured": false, 00:19:44.301 "data_offset": 0, 00:19:44.301 "data_size": 65536 00:19:44.301 }, 00:19:44.301 { 00:19:44.301 "name": "BaseBdev3", 00:19:44.301 "uuid": "0de6ae00-6e26-4eac-84ef-20fe3c060c2a", 00:19:44.301 "is_configured": true, 00:19:44.301 "data_offset": 0, 00:19:44.301 "data_size": 65536 00:19:44.301 }, 00:19:44.301 { 00:19:44.301 "name": "BaseBdev4", 00:19:44.301 "uuid": "f585f913-5ad6-4711-925a-5f1dbcf840cb", 00:19:44.301 "is_configured": true, 00:19:44.301 "data_offset": 0, 00:19:44.301 "data_size": 65536 00:19:44.301 } 00:19:44.301 ] 00:19:44.301 }' 00:19:44.301 23:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.301 23:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.868 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.868 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:45.125 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:19:45.125 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:45.383 [2024-05-14 23:35:08.514057] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:45.383 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:45.383 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:45.383 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:45.383 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:45.383 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:45.383 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:45.383 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.383 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.383 23:35:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.383 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.383 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.383 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.640 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.640 "name": "Existed_Raid", 00:19:45.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.640 "strip_size_kb": 64, 00:19:45.640 "state": "configuring", 00:19:45.640 "raid_level": "concat", 00:19:45.641 "superblock": false, 00:19:45.641 "num_base_bdevs": 4, 00:19:45.641 "num_base_bdevs_discovered": 3, 00:19:45.641 "num_base_bdevs_operational": 4, 00:19:45.641 "base_bdevs_list": [ 00:19:45.641 { 00:19:45.641 "name": null, 00:19:45.641 "uuid": "e9feb578-c0cc-45f9-be6a-43b0f2672829", 00:19:45.641 "is_configured": false, 00:19:45.641 "data_offset": 0, 00:19:45.641 "data_size": 65536 00:19:45.641 }, 00:19:45.641 { 00:19:45.641 "name": "BaseBdev2", 00:19:45.641 "uuid": "5ebd42f4-0b8f-489b-8adf-68476a3ed086", 00:19:45.641 "is_configured": true, 00:19:45.641 "data_offset": 0, 00:19:45.641 "data_size": 65536 00:19:45.641 }, 00:19:45.641 { 00:19:45.641 "name": "BaseBdev3", 00:19:45.641 "uuid": "0de6ae00-6e26-4eac-84ef-20fe3c060c2a", 00:19:45.641 "is_configured": true, 00:19:45.641 "data_offset": 0, 00:19:45.641 "data_size": 65536 00:19:45.641 }, 00:19:45.641 { 00:19:45.641 "name": "BaseBdev4", 00:19:45.641 "uuid": "f585f913-5ad6-4711-925a-5f1dbcf840cb", 00:19:45.641 "is_configured": true, 00:19:45.641 "data_offset": 0, 00:19:45.641 "data_size": 65536 00:19:45.641 } 00:19:45.641 ] 00:19:45.641 }' 00:19:45.641 23:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.641 23:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.207 23:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.207 23:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:46.466 23:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:19:46.466 23:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.466 23:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:46.724 23:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e9feb578-c0cc-45f9-be6a-43b0f2672829 00:19:46.983 [2024-05-14 23:35:10.215429] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:46.983 [2024-05-14 23:35:10.215470] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:19:46.983 [2024-05-14 23:35:10.215480] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:46.983 [2024-05-14 23:35:10.215610] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:46.983 [2024-05-14 23:35:10.215831] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:19:46.983 [2024-05-14 23:35:10.215850] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:19:46.983 [2024-05-14 23:35:10.216087] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.983 NewBaseBdev 00:19:46.983 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:19:46.983 23:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:19:46.983 23:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:46.983 23:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:46.983 23:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:46.983 23:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:46.983 23:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:47.240 23:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:47.500 [ 00:19:47.500 { 00:19:47.500 "name": "NewBaseBdev", 00:19:47.500 "aliases": [ 00:19:47.500 "e9feb578-c0cc-45f9-be6a-43b0f2672829" 00:19:47.500 ], 00:19:47.500 "product_name": "Malloc disk", 00:19:47.500 "block_size": 512, 00:19:47.500 "num_blocks": 65536, 00:19:47.500 "uuid": "e9feb578-c0cc-45f9-be6a-43b0f2672829", 00:19:47.500 "assigned_rate_limits": { 00:19:47.500 "rw_ios_per_sec": 0, 00:19:47.500 "rw_mbytes_per_sec": 0, 00:19:47.500 "r_mbytes_per_sec": 0, 00:19:47.500 "w_mbytes_per_sec": 0 00:19:47.500 }, 00:19:47.500 "claimed": true, 00:19:47.500 "claim_type": "exclusive_write", 00:19:47.500 "zoned": false, 00:19:47.500 "supported_io_types": { 00:19:47.500 "read": true, 00:19:47.500 "write": true, 00:19:47.500 "unmap": true, 00:19:47.500 "write_zeroes": true, 00:19:47.500 "flush": true, 00:19:47.500 "reset": true, 00:19:47.500 "compare": false, 00:19:47.500 "compare_and_write": false, 00:19:47.500 "abort": true, 00:19:47.500 "nvme_admin": false, 00:19:47.500 "nvme_io": false 00:19:47.500 }, 00:19:47.500 "memory_domains": [ 00:19:47.500 { 00:19:47.500 "dma_device_id": "system", 00:19:47.500 "dma_device_type": 1 00:19:47.500 }, 00:19:47.500 { 00:19:47.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.500 "dma_device_type": 2 00:19:47.500 } 00:19:47.500 ], 00:19:47.500 "driver_specific": {} 00:19:47.500 } 00:19:47.500 ] 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:47.500 23:35:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.500 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.759 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:47.759 "name": "Existed_Raid", 00:19:47.759 "uuid": "2b738783-4177-4c1b-bf26-ff811788b591", 00:19:47.759 "strip_size_kb": 64, 00:19:47.759 "state": "online", 00:19:47.759 "raid_level": "concat", 00:19:47.759 "superblock": false, 00:19:47.759 "num_base_bdevs": 4, 00:19:47.759 "num_base_bdevs_discovered": 4, 00:19:47.759 "num_base_bdevs_operational": 4, 00:19:47.759 "base_bdevs_list": [ 00:19:47.759 { 00:19:47.759 "name": "NewBaseBdev", 00:19:47.759 "uuid": "e9feb578-c0cc-45f9-be6a-43b0f2672829", 00:19:47.759 "is_configured": true, 00:19:47.759 "data_offset": 0, 00:19:47.759 "data_size": 65536 00:19:47.759 }, 00:19:47.759 { 00:19:47.759 "name": "BaseBdev2", 00:19:47.759 "uuid": "5ebd42f4-0b8f-489b-8adf-68476a3ed086", 00:19:47.759 "is_configured": true, 00:19:47.759 "data_offset": 0, 00:19:47.759 "data_size": 65536 00:19:47.759 }, 00:19:47.759 { 00:19:47.759 "name": "BaseBdev3", 00:19:47.759 "uuid": "0de6ae00-6e26-4eac-84ef-20fe3c060c2a", 00:19:47.759 "is_configured": true, 00:19:47.759 "data_offset": 0, 00:19:47.759 "data_size": 65536 00:19:47.759 }, 00:19:47.759 { 00:19:47.759 "name": "BaseBdev4", 00:19:47.759 "uuid": "f585f913-5ad6-4711-925a-5f1dbcf840cb", 00:19:47.759 "is_configured": true, 00:19:47.759 "data_offset": 0, 00:19:47.759 "data_size": 65536 00:19:47.759 } 00:19:47.759 ] 00:19:47.759 }' 00:19:47.759 23:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:47.759 23:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.326 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:19:48.326 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:19:48.326 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:48.326 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:48.326 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:48.326 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:48.326 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:48.326 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 
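[The Raid Volume dump that follows is what verify_raid_bdev_properties inspects. A minimal stand-alone sketch of the same query, assuming the SPDK target from this run is still listening on /var/tmp/spdk-raid.sock; the rpc.py path, socket, bdev name, and jq filters are taken from the trace in this log, not from the real bdev_raid.sh helper, which may differ in detail.]

    # Sketch: fetch the assembled raid bdev descriptor the same way the test does
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Full descriptor of the Raid Volume (what jq '.[]' prints just below)
    raid_bdev_info=$("$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid | jq '.[]')

    # Names of the configured base bdevs, using the same filter as bdev_raid.sh@202
    echo "$raid_bdev_info" \
        | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'

    # Block size of the volume; the run above expects 512
    echo "$raid_bdev_info" | jq .block_size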
00:19:48.587 [2024-05-14 23:35:11.768119] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:48.587 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:48.587 "name": "Existed_Raid", 00:19:48.587 "aliases": [ 00:19:48.587 "2b738783-4177-4c1b-bf26-ff811788b591" 00:19:48.587 ], 00:19:48.587 "product_name": "Raid Volume", 00:19:48.587 "block_size": 512, 00:19:48.587 "num_blocks": 262144, 00:19:48.587 "uuid": "2b738783-4177-4c1b-bf26-ff811788b591", 00:19:48.587 "assigned_rate_limits": { 00:19:48.587 "rw_ios_per_sec": 0, 00:19:48.587 "rw_mbytes_per_sec": 0, 00:19:48.587 "r_mbytes_per_sec": 0, 00:19:48.587 "w_mbytes_per_sec": 0 00:19:48.587 }, 00:19:48.587 "claimed": false, 00:19:48.587 "zoned": false, 00:19:48.587 "supported_io_types": { 00:19:48.587 "read": true, 00:19:48.587 "write": true, 00:19:48.587 "unmap": true, 00:19:48.587 "write_zeroes": true, 00:19:48.587 "flush": true, 00:19:48.587 "reset": true, 00:19:48.587 "compare": false, 00:19:48.587 "compare_and_write": false, 00:19:48.587 "abort": false, 00:19:48.587 "nvme_admin": false, 00:19:48.587 "nvme_io": false 00:19:48.587 }, 00:19:48.587 "memory_domains": [ 00:19:48.587 { 00:19:48.587 "dma_device_id": "system", 00:19:48.587 "dma_device_type": 1 00:19:48.588 }, 00:19:48.588 { 00:19:48.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.588 "dma_device_type": 2 00:19:48.588 }, 00:19:48.588 { 00:19:48.588 "dma_device_id": "system", 00:19:48.588 "dma_device_type": 1 00:19:48.588 }, 00:19:48.588 { 00:19:48.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.588 "dma_device_type": 2 00:19:48.588 }, 00:19:48.588 { 00:19:48.588 "dma_device_id": "system", 00:19:48.588 "dma_device_type": 1 00:19:48.588 }, 00:19:48.588 { 00:19:48.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.588 "dma_device_type": 2 00:19:48.588 }, 00:19:48.588 { 00:19:48.588 "dma_device_id": "system", 00:19:48.588 "dma_device_type": 1 00:19:48.588 }, 00:19:48.588 { 00:19:48.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.588 "dma_device_type": 2 00:19:48.588 } 00:19:48.588 ], 00:19:48.588 "driver_specific": { 00:19:48.588 "raid": { 00:19:48.588 "uuid": "2b738783-4177-4c1b-bf26-ff811788b591", 00:19:48.588 "strip_size_kb": 64, 00:19:48.588 "state": "online", 00:19:48.588 "raid_level": "concat", 00:19:48.588 "superblock": false, 00:19:48.588 "num_base_bdevs": 4, 00:19:48.588 "num_base_bdevs_discovered": 4, 00:19:48.588 "num_base_bdevs_operational": 4, 00:19:48.588 "base_bdevs_list": [ 00:19:48.588 { 00:19:48.588 "name": "NewBaseBdev", 00:19:48.588 "uuid": "e9feb578-c0cc-45f9-be6a-43b0f2672829", 00:19:48.588 "is_configured": true, 00:19:48.588 "data_offset": 0, 00:19:48.588 "data_size": 65536 00:19:48.588 }, 00:19:48.588 { 00:19:48.588 "name": "BaseBdev2", 00:19:48.588 "uuid": "5ebd42f4-0b8f-489b-8adf-68476a3ed086", 00:19:48.588 "is_configured": true, 00:19:48.588 "data_offset": 0, 00:19:48.588 "data_size": 65536 00:19:48.588 }, 00:19:48.588 { 00:19:48.588 "name": "BaseBdev3", 00:19:48.588 "uuid": "0de6ae00-6e26-4eac-84ef-20fe3c060c2a", 00:19:48.588 "is_configured": true, 00:19:48.588 "data_offset": 0, 00:19:48.588 "data_size": 65536 00:19:48.588 }, 00:19:48.588 { 00:19:48.588 "name": "BaseBdev4", 00:19:48.588 "uuid": "f585f913-5ad6-4711-925a-5f1dbcf840cb", 00:19:48.588 "is_configured": true, 00:19:48.588 "data_offset": 0, 00:19:48.588 "data_size": 65536 00:19:48.588 } 00:19:48.588 ] 00:19:48.588 } 00:19:48.588 } 00:19:48.588 }' 00:19:48.588 23:35:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:48.588 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:19:48.588 BaseBdev2 00:19:48.588 BaseBdev3 00:19:48.588 BaseBdev4' 00:19:48.588 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:48.588 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:48.588 23:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:48.848 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:48.848 "name": "NewBaseBdev", 00:19:48.848 "aliases": [ 00:19:48.848 "e9feb578-c0cc-45f9-be6a-43b0f2672829" 00:19:48.848 ], 00:19:48.848 "product_name": "Malloc disk", 00:19:48.848 "block_size": 512, 00:19:48.848 "num_blocks": 65536, 00:19:48.848 "uuid": "e9feb578-c0cc-45f9-be6a-43b0f2672829", 00:19:48.848 "assigned_rate_limits": { 00:19:48.848 "rw_ios_per_sec": 0, 00:19:48.848 "rw_mbytes_per_sec": 0, 00:19:48.848 "r_mbytes_per_sec": 0, 00:19:48.848 "w_mbytes_per_sec": 0 00:19:48.848 }, 00:19:48.848 "claimed": true, 00:19:48.848 "claim_type": "exclusive_write", 00:19:48.848 "zoned": false, 00:19:48.848 "supported_io_types": { 00:19:48.848 "read": true, 00:19:48.848 "write": true, 00:19:48.848 "unmap": true, 00:19:48.848 "write_zeroes": true, 00:19:48.848 "flush": true, 00:19:48.848 "reset": true, 00:19:48.848 "compare": false, 00:19:48.848 "compare_and_write": false, 00:19:48.848 "abort": true, 00:19:48.848 "nvme_admin": false, 00:19:48.848 "nvme_io": false 00:19:48.848 }, 00:19:48.848 "memory_domains": [ 00:19:48.848 { 00:19:48.848 "dma_device_id": "system", 00:19:48.848 "dma_device_type": 1 00:19:48.848 }, 00:19:48.848 { 00:19:48.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.848 "dma_device_type": 2 00:19:48.848 } 00:19:48.848 ], 00:19:48.848 "driver_specific": {} 00:19:48.848 }' 00:19:48.848 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:48.848 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:49.106 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:49.106 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:49.106 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:49.106 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:49.106 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:49.106 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:49.405 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:49.405 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:49.405 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:49.405 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:49.405 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:49.405 23:35:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:49.405 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:49.687 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:49.687 "name": "BaseBdev2", 00:19:49.687 "aliases": [ 00:19:49.687 "5ebd42f4-0b8f-489b-8adf-68476a3ed086" 00:19:49.687 ], 00:19:49.687 "product_name": "Malloc disk", 00:19:49.687 "block_size": 512, 00:19:49.687 "num_blocks": 65536, 00:19:49.687 "uuid": "5ebd42f4-0b8f-489b-8adf-68476a3ed086", 00:19:49.687 "assigned_rate_limits": { 00:19:49.687 "rw_ios_per_sec": 0, 00:19:49.687 "rw_mbytes_per_sec": 0, 00:19:49.687 "r_mbytes_per_sec": 0, 00:19:49.687 "w_mbytes_per_sec": 0 00:19:49.687 }, 00:19:49.687 "claimed": true, 00:19:49.687 "claim_type": "exclusive_write", 00:19:49.687 "zoned": false, 00:19:49.687 "supported_io_types": { 00:19:49.687 "read": true, 00:19:49.687 "write": true, 00:19:49.687 "unmap": true, 00:19:49.687 "write_zeroes": true, 00:19:49.687 "flush": true, 00:19:49.687 "reset": true, 00:19:49.687 "compare": false, 00:19:49.687 "compare_and_write": false, 00:19:49.687 "abort": true, 00:19:49.687 "nvme_admin": false, 00:19:49.687 "nvme_io": false 00:19:49.687 }, 00:19:49.687 "memory_domains": [ 00:19:49.687 { 00:19:49.687 "dma_device_id": "system", 00:19:49.687 "dma_device_type": 1 00:19:49.687 }, 00:19:49.687 { 00:19:49.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.687 "dma_device_type": 2 00:19:49.687 } 00:19:49.687 ], 00:19:49.687 "driver_specific": {} 00:19:49.687 }' 00:19:49.687 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:49.687 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:49.687 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:49.687 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:49.687 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:49.687 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:49.687 23:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:49.946 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:49.946 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:49.946 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:49.946 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:49.946 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:49.946 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:49.946 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:49.946 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:50.205 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:50.205 "name": "BaseBdev3", 00:19:50.205 "aliases": [ 00:19:50.205 "0de6ae00-6e26-4eac-84ef-20fe3c060c2a" 00:19:50.205 ], 
00:19:50.205 "product_name": "Malloc disk", 00:19:50.205 "block_size": 512, 00:19:50.205 "num_blocks": 65536, 00:19:50.205 "uuid": "0de6ae00-6e26-4eac-84ef-20fe3c060c2a", 00:19:50.205 "assigned_rate_limits": { 00:19:50.205 "rw_ios_per_sec": 0, 00:19:50.205 "rw_mbytes_per_sec": 0, 00:19:50.205 "r_mbytes_per_sec": 0, 00:19:50.205 "w_mbytes_per_sec": 0 00:19:50.205 }, 00:19:50.205 "claimed": true, 00:19:50.205 "claim_type": "exclusive_write", 00:19:50.205 "zoned": false, 00:19:50.205 "supported_io_types": { 00:19:50.205 "read": true, 00:19:50.205 "write": true, 00:19:50.205 "unmap": true, 00:19:50.205 "write_zeroes": true, 00:19:50.205 "flush": true, 00:19:50.205 "reset": true, 00:19:50.205 "compare": false, 00:19:50.205 "compare_and_write": false, 00:19:50.205 "abort": true, 00:19:50.205 "nvme_admin": false, 00:19:50.205 "nvme_io": false 00:19:50.205 }, 00:19:50.205 "memory_domains": [ 00:19:50.205 { 00:19:50.205 "dma_device_id": "system", 00:19:50.205 "dma_device_type": 1 00:19:50.205 }, 00:19:50.205 { 00:19:50.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.205 "dma_device_type": 2 00:19:50.205 } 00:19:50.205 ], 00:19:50.205 "driver_specific": {} 00:19:50.205 }' 00:19:50.205 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:50.205 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:50.463 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:50.463 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:50.463 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:50.463 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:50.463 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:50.463 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:50.721 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:50.721 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:50.721 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:50.721 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:50.721 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:50.721 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:50.721 23:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:50.980 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:50.980 "name": "BaseBdev4", 00:19:50.980 "aliases": [ 00:19:50.980 "f585f913-5ad6-4711-925a-5f1dbcf840cb" 00:19:50.980 ], 00:19:50.980 "product_name": "Malloc disk", 00:19:50.980 "block_size": 512, 00:19:50.980 "num_blocks": 65536, 00:19:50.980 "uuid": "f585f913-5ad6-4711-925a-5f1dbcf840cb", 00:19:50.980 "assigned_rate_limits": { 00:19:50.980 "rw_ios_per_sec": 0, 00:19:50.980 "rw_mbytes_per_sec": 0, 00:19:50.980 "r_mbytes_per_sec": 0, 00:19:50.980 "w_mbytes_per_sec": 0 00:19:50.980 }, 00:19:50.980 "claimed": true, 00:19:50.980 "claim_type": "exclusive_write", 00:19:50.980 "zoned": false, 00:19:50.980 
"supported_io_types": { 00:19:50.980 "read": true, 00:19:50.980 "write": true, 00:19:50.980 "unmap": true, 00:19:50.980 "write_zeroes": true, 00:19:50.980 "flush": true, 00:19:50.980 "reset": true, 00:19:50.980 "compare": false, 00:19:50.980 "compare_and_write": false, 00:19:50.980 "abort": true, 00:19:50.980 "nvme_admin": false, 00:19:50.980 "nvme_io": false 00:19:50.980 }, 00:19:50.980 "memory_domains": [ 00:19:50.980 { 00:19:50.980 "dma_device_id": "system", 00:19:50.980 "dma_device_type": 1 00:19:50.980 }, 00:19:50.980 { 00:19:50.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.980 "dma_device_type": 2 00:19:50.980 } 00:19:50.980 ], 00:19:50.980 "driver_specific": {} 00:19:50.980 }' 00:19:50.980 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:50.980 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:51.239 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:51.239 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:51.239 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:51.239 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:51.239 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:51.239 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:51.239 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:51.239 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:51.497 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:51.497 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:51.497 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:51.756 [2024-05-14 23:35:14.856407] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:51.756 [2024-05-14 23:35:14.856439] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:51.756 [2024-05-14 23:35:14.856522] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:51.756 [2024-05-14 23:35:14.856568] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:51.756 [2024-05-14 23:35:14.856579] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:19:51.756 23:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 66982 00:19:51.756 23:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 66982 ']' 00:19:51.756 23:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 66982 00:19:51.756 23:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:19:51.756 23:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:51.756 23:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66982 00:19:51.757 killing process with pid 66982 00:19:51.757 23:35:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:51.757 23:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:51.757 23:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66982' 00:19:51.757 23:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 66982 00:19:51.757 23:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 66982 00:19:51.757 [2024-05-14 23:35:14.894834] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:52.015 [2024-05-14 23:35:15.207866] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:19:53.390 00:19:53.390 real 0m35.350s 00:19:53.390 user 1m6.666s 00:19:53.390 sys 0m3.480s 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.390 ************************************ 00:19:53.390 END TEST raid_state_function_test 00:19:53.390 ************************************ 00:19:53.390 23:35:16 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:53.390 23:35:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:53.390 23:35:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:53.390 23:35:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:53.390 ************************************ 00:19:53.390 START TEST raid_state_function_test_sb 00:19:53.390 ************************************ 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 true 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:53.390 Process raid pid: 68109 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=68109 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 68109' 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 68109 /var/tmp/spdk-raid.sock 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 68109 ']' 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:53.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:53.390 23:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.390 [2024-05-14 23:35:16.672256] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:19:53.390 [2024-05-14 23:35:16.672449] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.648 [2024-05-14 23:35:16.836316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.906 [2024-05-14 23:35:17.072494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.230 [2024-05-14 23:35:17.276510] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:54.489 [2024-05-14 23:35:17.751613] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:54.489 [2024-05-14 23:35:17.751701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:54.489 [2024-05-14 23:35:17.751718] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:54.489 [2024-05-14 23:35:17.751739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:54.489 [2024-05-14 23:35:17.751749] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:54.489 [2024-05-14 23:35:17.751793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:54.489 [2024-05-14 23:35:17.751804] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:54.489 [2024-05-14 23:35:17.751828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.489 23:35:17 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.747 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:54.747 "name": "Existed_Raid", 00:19:54.747 "uuid": "34dab349-3cf5-48bb-81e7-0d1f1b10964e", 00:19:54.747 "strip_size_kb": 64, 00:19:54.747 "state": "configuring", 00:19:54.747 "raid_level": "concat", 00:19:54.747 "superblock": true, 00:19:54.747 "num_base_bdevs": 4, 00:19:54.747 "num_base_bdevs_discovered": 0, 00:19:54.747 "num_base_bdevs_operational": 4, 00:19:54.747 "base_bdevs_list": [ 00:19:54.747 { 00:19:54.747 "name": "BaseBdev1", 00:19:54.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.747 "is_configured": false, 00:19:54.747 "data_offset": 0, 00:19:54.747 "data_size": 0 00:19:54.747 }, 00:19:54.747 { 00:19:54.747 "name": "BaseBdev2", 00:19:54.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.747 "is_configured": false, 00:19:54.747 "data_offset": 0, 00:19:54.747 "data_size": 0 00:19:54.747 }, 00:19:54.747 { 00:19:54.747 "name": "BaseBdev3", 00:19:54.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.747 "is_configured": false, 00:19:54.747 "data_offset": 0, 00:19:54.747 "data_size": 0 00:19:54.747 }, 00:19:54.747 { 00:19:54.747 "name": "BaseBdev4", 00:19:54.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.747 "is_configured": false, 00:19:54.747 "data_offset": 0, 00:19:54.747 "data_size": 0 00:19:54.747 } 00:19:54.747 ] 00:19:54.747 }' 00:19:54.747 23:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:54.747 23:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.680 23:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:55.680 [2024-05-14 23:35:18.844949] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:55.680 [2024-05-14 23:35:18.845022] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:19:55.681 23:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:55.938 [2024-05-14 23:35:19.089049] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:55.938 [2024-05-14 23:35:19.089125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:55.939 [2024-05-14 23:35:19.089155] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:55.939 [2024-05-14 23:35:19.089420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:55.939 [2024-05-14 23:35:19.089440] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:55.939 [2024-05-14 23:35:19.089462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:55.939 [2024-05-14 23:35:19.089471] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:55.939 [2024-05-14 23:35:19.089499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:55.939 23:35:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:56.198 BaseBdev1 00:19:56.198 [2024-05-14 23:35:19.331414] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.198 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:19:56.198 23:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:56.198 23:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:56.198 23:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:56.198 23:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:56.198 23:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:56.198 23:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:56.456 23:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:56.715 [ 00:19:56.715 { 00:19:56.716 "name": "BaseBdev1", 00:19:56.716 "aliases": [ 00:19:56.716 "6b6e944c-48c1-4e80-9d01-54b8a7ef1fc0" 00:19:56.716 ], 00:19:56.716 "product_name": "Malloc disk", 00:19:56.716 "block_size": 512, 00:19:56.716 "num_blocks": 65536, 00:19:56.716 "uuid": "6b6e944c-48c1-4e80-9d01-54b8a7ef1fc0", 00:19:56.716 "assigned_rate_limits": { 00:19:56.716 "rw_ios_per_sec": 0, 00:19:56.716 "rw_mbytes_per_sec": 0, 00:19:56.716 "r_mbytes_per_sec": 0, 00:19:56.716 "w_mbytes_per_sec": 0 00:19:56.716 }, 00:19:56.716 "claimed": true, 00:19:56.716 "claim_type": "exclusive_write", 00:19:56.716 "zoned": false, 00:19:56.716 "supported_io_types": { 00:19:56.716 "read": true, 00:19:56.716 "write": true, 00:19:56.716 "unmap": true, 00:19:56.716 "write_zeroes": true, 00:19:56.716 "flush": true, 00:19:56.716 "reset": true, 00:19:56.716 "compare": false, 00:19:56.716 "compare_and_write": false, 00:19:56.716 "abort": true, 00:19:56.716 "nvme_admin": false, 00:19:56.716 "nvme_io": false 00:19:56.716 }, 00:19:56.716 "memory_domains": [ 00:19:56.716 { 00:19:56.716 "dma_device_id": "system", 00:19:56.716 "dma_device_type": 1 00:19:56.716 }, 00:19:56.716 { 00:19:56.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.716 "dma_device_type": 2 00:19:56.716 } 00:19:56.716 ], 00:19:56.716 "driver_specific": {} 00:19:56.716 } 00:19:56.716 ] 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.716 23:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.975 23:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.975 "name": "Existed_Raid", 00:19:56.975 "uuid": "9cc95e9f-9652-420a-acf0-d91040693dbb", 00:19:56.975 "strip_size_kb": 64, 00:19:56.975 "state": "configuring", 00:19:56.975 "raid_level": "concat", 00:19:56.975 "superblock": true, 00:19:56.975 "num_base_bdevs": 4, 00:19:56.975 "num_base_bdevs_discovered": 1, 00:19:56.975 "num_base_bdevs_operational": 4, 00:19:56.975 "base_bdevs_list": [ 00:19:56.975 { 00:19:56.975 "name": "BaseBdev1", 00:19:56.975 "uuid": "6b6e944c-48c1-4e80-9d01-54b8a7ef1fc0", 00:19:56.975 "is_configured": true, 00:19:56.975 "data_offset": 2048, 00:19:56.975 "data_size": 63488 00:19:56.975 }, 00:19:56.975 { 00:19:56.975 "name": "BaseBdev2", 00:19:56.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.975 "is_configured": false, 00:19:56.975 "data_offset": 0, 00:19:56.975 "data_size": 0 00:19:56.975 }, 00:19:56.975 { 00:19:56.975 "name": "BaseBdev3", 00:19:56.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.975 "is_configured": false, 00:19:56.975 "data_offset": 0, 00:19:56.975 "data_size": 0 00:19:56.975 }, 00:19:56.975 { 00:19:56.975 "name": "BaseBdev4", 00:19:56.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.975 "is_configured": false, 00:19:56.975 "data_offset": 0, 00:19:56.975 "data_size": 0 00:19:56.975 } 00:19:56.975 ] 00:19:56.975 }' 00:19:56.975 23:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.975 23:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.542 23:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:57.800 [2024-05-14 23:35:20.993027] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:57.800 [2024-05-14 23:35:20.993095] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:19:57.800 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:58.058 [2024-05-14 23:35:21.245097] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:58.058 [2024-05-14 23:35:21.246710] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:58.058 [2024-05-14 23:35:21.246812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:19:58.058 [2024-05-14 23:35:21.246837] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:58.058 [2024-05-14 23:35:21.246872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:58.058 [2024-05-14 23:35:21.246883] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:58.058 [2024-05-14 23:35:21.246900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.058 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.316 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:58.316 "name": "Existed_Raid", 00:19:58.316 "uuid": "c8007e2a-d7b8-4663-b144-8d94690eee30", 00:19:58.316 "strip_size_kb": 64, 00:19:58.316 "state": "configuring", 00:19:58.316 "raid_level": "concat", 00:19:58.316 "superblock": true, 00:19:58.316 "num_base_bdevs": 4, 00:19:58.316 "num_base_bdevs_discovered": 1, 00:19:58.316 "num_base_bdevs_operational": 4, 00:19:58.316 "base_bdevs_list": [ 00:19:58.316 { 00:19:58.316 "name": "BaseBdev1", 00:19:58.316 "uuid": "6b6e944c-48c1-4e80-9d01-54b8a7ef1fc0", 00:19:58.316 "is_configured": true, 00:19:58.316 "data_offset": 2048, 00:19:58.316 "data_size": 63488 00:19:58.316 }, 00:19:58.316 { 00:19:58.316 "name": "BaseBdev2", 00:19:58.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.316 "is_configured": false, 00:19:58.316 "data_offset": 0, 00:19:58.316 "data_size": 0 00:19:58.316 }, 00:19:58.316 { 00:19:58.316 "name": "BaseBdev3", 00:19:58.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.316 "is_configured": false, 00:19:58.316 "data_offset": 0, 00:19:58.316 "data_size": 0 00:19:58.316 }, 00:19:58.316 { 00:19:58.316 "name": "BaseBdev4", 00:19:58.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.316 
"is_configured": false, 00:19:58.316 "data_offset": 0, 00:19:58.316 "data_size": 0 00:19:58.316 } 00:19:58.316 ] 00:19:58.316 }' 00:19:58.316 23:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.316 23:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.265 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:59.265 BaseBdev2 00:19:59.265 [2024-05-14 23:35:22.479389] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:59.265 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:19:59.265 23:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:59.265 23:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:59.265 23:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:59.265 23:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:59.265 23:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:59.265 23:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:59.523 23:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:59.782 [ 00:19:59.782 { 00:19:59.782 "name": "BaseBdev2", 00:19:59.782 "aliases": [ 00:19:59.782 "334151e2-095d-4262-8392-b6166aff61fe" 00:19:59.782 ], 00:19:59.782 "product_name": "Malloc disk", 00:19:59.782 "block_size": 512, 00:19:59.782 "num_blocks": 65536, 00:19:59.782 "uuid": "334151e2-095d-4262-8392-b6166aff61fe", 00:19:59.782 "assigned_rate_limits": { 00:19:59.782 "rw_ios_per_sec": 0, 00:19:59.782 "rw_mbytes_per_sec": 0, 00:19:59.782 "r_mbytes_per_sec": 0, 00:19:59.782 "w_mbytes_per_sec": 0 00:19:59.782 }, 00:19:59.782 "claimed": true, 00:19:59.782 "claim_type": "exclusive_write", 00:19:59.782 "zoned": false, 00:19:59.782 "supported_io_types": { 00:19:59.782 "read": true, 00:19:59.782 "write": true, 00:19:59.782 "unmap": true, 00:19:59.782 "write_zeroes": true, 00:19:59.782 "flush": true, 00:19:59.782 "reset": true, 00:19:59.782 "compare": false, 00:19:59.782 "compare_and_write": false, 00:19:59.782 "abort": true, 00:19:59.782 "nvme_admin": false, 00:19:59.782 "nvme_io": false 00:19:59.782 }, 00:19:59.782 "memory_domains": [ 00:19:59.782 { 00:19:59.782 "dma_device_id": "system", 00:19:59.782 "dma_device_type": 1 00:19:59.782 }, 00:19:59.782 { 00:19:59.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.782 "dma_device_type": 2 00:19:59.782 } 00:19:59.782 ], 00:19:59.782 "driver_specific": {} 00:19:59.782 } 00:19:59.782 ] 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.782 23:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.041 23:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:00.041 "name": "Existed_Raid", 00:20:00.041 "uuid": "c8007e2a-d7b8-4663-b144-8d94690eee30", 00:20:00.041 "strip_size_kb": 64, 00:20:00.041 "state": "configuring", 00:20:00.041 "raid_level": "concat", 00:20:00.041 "superblock": true, 00:20:00.041 "num_base_bdevs": 4, 00:20:00.041 "num_base_bdevs_discovered": 2, 00:20:00.041 "num_base_bdevs_operational": 4, 00:20:00.041 "base_bdevs_list": [ 00:20:00.041 { 00:20:00.041 "name": "BaseBdev1", 00:20:00.041 "uuid": "6b6e944c-48c1-4e80-9d01-54b8a7ef1fc0", 00:20:00.041 "is_configured": true, 00:20:00.041 "data_offset": 2048, 00:20:00.041 "data_size": 63488 00:20:00.041 }, 00:20:00.041 { 00:20:00.041 "name": "BaseBdev2", 00:20:00.041 "uuid": "334151e2-095d-4262-8392-b6166aff61fe", 00:20:00.041 "is_configured": true, 00:20:00.041 "data_offset": 2048, 00:20:00.041 "data_size": 63488 00:20:00.041 }, 00:20:00.041 { 00:20:00.041 "name": "BaseBdev3", 00:20:00.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.041 "is_configured": false, 00:20:00.041 "data_offset": 0, 00:20:00.041 "data_size": 0 00:20:00.041 }, 00:20:00.041 { 00:20:00.041 "name": "BaseBdev4", 00:20:00.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.041 "is_configured": false, 00:20:00.041 "data_offset": 0, 00:20:00.041 "data_size": 0 00:20:00.041 } 00:20:00.041 ] 00:20:00.041 }' 00:20:00.041 23:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:00.041 23:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.608 23:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:00.867 BaseBdev3 00:20:00.867 [2024-05-14 23:35:24.076571] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:00.867 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:20:00.867 23:35:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:00.867 23:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:00.867 23:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:00.867 23:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:00.867 23:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:00.867 23:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:01.125 23:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:01.384 [ 00:20:01.384 { 00:20:01.384 "name": "BaseBdev3", 00:20:01.384 "aliases": [ 00:20:01.384 "99b480bf-a422-4192-92e9-6fa23a9e7acf" 00:20:01.384 ], 00:20:01.384 "product_name": "Malloc disk", 00:20:01.384 "block_size": 512, 00:20:01.384 "num_blocks": 65536, 00:20:01.384 "uuid": "99b480bf-a422-4192-92e9-6fa23a9e7acf", 00:20:01.384 "assigned_rate_limits": { 00:20:01.384 "rw_ios_per_sec": 0, 00:20:01.384 "rw_mbytes_per_sec": 0, 00:20:01.384 "r_mbytes_per_sec": 0, 00:20:01.384 "w_mbytes_per_sec": 0 00:20:01.384 }, 00:20:01.384 "claimed": true, 00:20:01.384 "claim_type": "exclusive_write", 00:20:01.384 "zoned": false, 00:20:01.384 "supported_io_types": { 00:20:01.384 "read": true, 00:20:01.384 "write": true, 00:20:01.384 "unmap": true, 00:20:01.384 "write_zeroes": true, 00:20:01.384 "flush": true, 00:20:01.384 "reset": true, 00:20:01.384 "compare": false, 00:20:01.384 "compare_and_write": false, 00:20:01.384 "abort": true, 00:20:01.384 "nvme_admin": false, 00:20:01.384 "nvme_io": false 00:20:01.384 }, 00:20:01.384 "memory_domains": [ 00:20:01.384 { 00:20:01.384 "dma_device_id": "system", 00:20:01.384 "dma_device_type": 1 00:20:01.384 }, 00:20:01.384 { 00:20:01.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.384 "dma_device_type": 2 00:20:01.384 } 00:20:01.384 ], 00:20:01.384 "driver_specific": {} 00:20:01.384 } 00:20:01.384 ] 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.384 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.643 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:01.643 "name": "Existed_Raid", 00:20:01.643 "uuid": "c8007e2a-d7b8-4663-b144-8d94690eee30", 00:20:01.643 "strip_size_kb": 64, 00:20:01.643 "state": "configuring", 00:20:01.643 "raid_level": "concat", 00:20:01.643 "superblock": true, 00:20:01.643 "num_base_bdevs": 4, 00:20:01.643 "num_base_bdevs_discovered": 3, 00:20:01.643 "num_base_bdevs_operational": 4, 00:20:01.643 "base_bdevs_list": [ 00:20:01.643 { 00:20:01.643 "name": "BaseBdev1", 00:20:01.643 "uuid": "6b6e944c-48c1-4e80-9d01-54b8a7ef1fc0", 00:20:01.643 "is_configured": true, 00:20:01.643 "data_offset": 2048, 00:20:01.643 "data_size": 63488 00:20:01.643 }, 00:20:01.643 { 00:20:01.643 "name": "BaseBdev2", 00:20:01.643 "uuid": "334151e2-095d-4262-8392-b6166aff61fe", 00:20:01.643 "is_configured": true, 00:20:01.643 "data_offset": 2048, 00:20:01.643 "data_size": 63488 00:20:01.643 }, 00:20:01.643 { 00:20:01.643 "name": "BaseBdev3", 00:20:01.643 "uuid": "99b480bf-a422-4192-92e9-6fa23a9e7acf", 00:20:01.643 "is_configured": true, 00:20:01.643 "data_offset": 2048, 00:20:01.643 "data_size": 63488 00:20:01.643 }, 00:20:01.643 { 00:20:01.643 "name": "BaseBdev4", 00:20:01.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.643 "is_configured": false, 00:20:01.643 "data_offset": 0, 00:20:01.643 "data_size": 0 00:20:01.643 } 00:20:01.643 ] 00:20:01.643 }' 00:20:01.643 23:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:01.643 23:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.212 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:02.471 BaseBdev4 00:20:02.471 [2024-05-14 23:35:25.559585] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:02.471 [2024-05-14 23:35:25.559765] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:20:02.471 [2024-05-14 23:35:25.559781] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:02.471 [2024-05-14 23:35:25.559886] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:20:02.471 [2024-05-14 23:35:25.560100] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:20:02.471 [2024-05-14 23:35:25.560114] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:20:02.471 [2024-05-14 23:35:25.560528] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.471 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:20:02.471 23:35:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:20:02.471 23:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:02.471 23:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:02.471 23:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:02.471 23:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:02.471 23:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:02.730 [ 00:20:02.730 { 00:20:02.730 "name": "BaseBdev4", 00:20:02.730 "aliases": [ 00:20:02.730 "150f46fa-89a1-4d6a-9999-0e04dc6d33c1" 00:20:02.730 ], 00:20:02.730 "product_name": "Malloc disk", 00:20:02.730 "block_size": 512, 00:20:02.730 "num_blocks": 65536, 00:20:02.730 "uuid": "150f46fa-89a1-4d6a-9999-0e04dc6d33c1", 00:20:02.730 "assigned_rate_limits": { 00:20:02.730 "rw_ios_per_sec": 0, 00:20:02.730 "rw_mbytes_per_sec": 0, 00:20:02.730 "r_mbytes_per_sec": 0, 00:20:02.730 "w_mbytes_per_sec": 0 00:20:02.730 }, 00:20:02.730 "claimed": true, 00:20:02.730 "claim_type": "exclusive_write", 00:20:02.730 "zoned": false, 00:20:02.730 "supported_io_types": { 00:20:02.730 "read": true, 00:20:02.730 "write": true, 00:20:02.730 "unmap": true, 00:20:02.730 "write_zeroes": true, 00:20:02.730 "flush": true, 00:20:02.730 "reset": true, 00:20:02.730 "compare": false, 00:20:02.730 "compare_and_write": false, 00:20:02.730 "abort": true, 00:20:02.730 "nvme_admin": false, 00:20:02.730 "nvme_io": false 00:20:02.730 }, 00:20:02.730 "memory_domains": [ 00:20:02.730 { 00:20:02.730 "dma_device_id": "system", 00:20:02.730 "dma_device_type": 1 00:20:02.730 }, 00:20:02.730 { 00:20:02.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.730 "dma_device_type": 2 00:20:02.730 } 00:20:02.730 ], 00:20:02.730 "driver_specific": {} 00:20:02.730 } 00:20:02.730 ] 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.730 23:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.988 23:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:02.989 "name": "Existed_Raid", 00:20:02.989 "uuid": "c8007e2a-d7b8-4663-b144-8d94690eee30", 00:20:02.989 "strip_size_kb": 64, 00:20:02.989 "state": "online", 00:20:02.989 "raid_level": "concat", 00:20:02.989 "superblock": true, 00:20:02.989 "num_base_bdevs": 4, 00:20:02.989 "num_base_bdevs_discovered": 4, 00:20:02.989 "num_base_bdevs_operational": 4, 00:20:02.989 "base_bdevs_list": [ 00:20:02.989 { 00:20:02.989 "name": "BaseBdev1", 00:20:02.989 "uuid": "6b6e944c-48c1-4e80-9d01-54b8a7ef1fc0", 00:20:02.989 "is_configured": true, 00:20:02.989 "data_offset": 2048, 00:20:02.989 "data_size": 63488 00:20:02.989 }, 00:20:02.989 { 00:20:02.989 "name": "BaseBdev2", 00:20:02.989 "uuid": "334151e2-095d-4262-8392-b6166aff61fe", 00:20:02.989 "is_configured": true, 00:20:02.989 "data_offset": 2048, 00:20:02.989 "data_size": 63488 00:20:02.989 }, 00:20:02.989 { 00:20:02.989 "name": "BaseBdev3", 00:20:02.989 "uuid": "99b480bf-a422-4192-92e9-6fa23a9e7acf", 00:20:02.989 "is_configured": true, 00:20:02.989 "data_offset": 2048, 00:20:02.989 "data_size": 63488 00:20:02.989 }, 00:20:02.989 { 00:20:02.989 "name": "BaseBdev4", 00:20:02.989 "uuid": "150f46fa-89a1-4d6a-9999-0e04dc6d33c1", 00:20:02.989 "is_configured": true, 00:20:02.989 "data_offset": 2048, 00:20:02.989 "data_size": 63488 00:20:02.989 } 00:20:02.989 ] 00:20:02.989 }' 00:20:02.989 23:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:02.989 23:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.925 23:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:20:03.925 23:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:03.925 23:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:03.925 23:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:03.925 23:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:03.925 23:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:20:03.925 23:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:03.925 23:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:03.925 [2024-05-14 23:35:27.072049] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.925 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:03.925 "name": "Existed_Raid", 00:20:03.925 "aliases": [ 00:20:03.925 "c8007e2a-d7b8-4663-b144-8d94690eee30" 00:20:03.926 ], 00:20:03.926 
"product_name": "Raid Volume", 00:20:03.926 "block_size": 512, 00:20:03.926 "num_blocks": 253952, 00:20:03.926 "uuid": "c8007e2a-d7b8-4663-b144-8d94690eee30", 00:20:03.926 "assigned_rate_limits": { 00:20:03.926 "rw_ios_per_sec": 0, 00:20:03.926 "rw_mbytes_per_sec": 0, 00:20:03.926 "r_mbytes_per_sec": 0, 00:20:03.926 "w_mbytes_per_sec": 0 00:20:03.926 }, 00:20:03.926 "claimed": false, 00:20:03.926 "zoned": false, 00:20:03.926 "supported_io_types": { 00:20:03.926 "read": true, 00:20:03.926 "write": true, 00:20:03.926 "unmap": true, 00:20:03.926 "write_zeroes": true, 00:20:03.926 "flush": true, 00:20:03.926 "reset": true, 00:20:03.926 "compare": false, 00:20:03.926 "compare_and_write": false, 00:20:03.926 "abort": false, 00:20:03.926 "nvme_admin": false, 00:20:03.926 "nvme_io": false 00:20:03.926 }, 00:20:03.926 "memory_domains": [ 00:20:03.926 { 00:20:03.926 "dma_device_id": "system", 00:20:03.926 "dma_device_type": 1 00:20:03.926 }, 00:20:03.926 { 00:20:03.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.926 "dma_device_type": 2 00:20:03.926 }, 00:20:03.926 { 00:20:03.926 "dma_device_id": "system", 00:20:03.926 "dma_device_type": 1 00:20:03.926 }, 00:20:03.926 { 00:20:03.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.926 "dma_device_type": 2 00:20:03.926 }, 00:20:03.926 { 00:20:03.926 "dma_device_id": "system", 00:20:03.926 "dma_device_type": 1 00:20:03.926 }, 00:20:03.926 { 00:20:03.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.926 "dma_device_type": 2 00:20:03.926 }, 00:20:03.926 { 00:20:03.926 "dma_device_id": "system", 00:20:03.926 "dma_device_type": 1 00:20:03.926 }, 00:20:03.926 { 00:20:03.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.926 "dma_device_type": 2 00:20:03.926 } 00:20:03.926 ], 00:20:03.926 "driver_specific": { 00:20:03.926 "raid": { 00:20:03.926 "uuid": "c8007e2a-d7b8-4663-b144-8d94690eee30", 00:20:03.926 "strip_size_kb": 64, 00:20:03.926 "state": "online", 00:20:03.926 "raid_level": "concat", 00:20:03.926 "superblock": true, 00:20:03.926 "num_base_bdevs": 4, 00:20:03.926 "num_base_bdevs_discovered": 4, 00:20:03.926 "num_base_bdevs_operational": 4, 00:20:03.926 "base_bdevs_list": [ 00:20:03.926 { 00:20:03.926 "name": "BaseBdev1", 00:20:03.926 "uuid": "6b6e944c-48c1-4e80-9d01-54b8a7ef1fc0", 00:20:03.926 "is_configured": true, 00:20:03.926 "data_offset": 2048, 00:20:03.926 "data_size": 63488 00:20:03.926 }, 00:20:03.926 { 00:20:03.926 "name": "BaseBdev2", 00:20:03.926 "uuid": "334151e2-095d-4262-8392-b6166aff61fe", 00:20:03.926 "is_configured": true, 00:20:03.926 "data_offset": 2048, 00:20:03.926 "data_size": 63488 00:20:03.926 }, 00:20:03.926 { 00:20:03.926 "name": "BaseBdev3", 00:20:03.926 "uuid": "99b480bf-a422-4192-92e9-6fa23a9e7acf", 00:20:03.926 "is_configured": true, 00:20:03.926 "data_offset": 2048, 00:20:03.926 "data_size": 63488 00:20:03.926 }, 00:20:03.926 { 00:20:03.926 "name": "BaseBdev4", 00:20:03.926 "uuid": "150f46fa-89a1-4d6a-9999-0e04dc6d33c1", 00:20:03.926 "is_configured": true, 00:20:03.926 "data_offset": 2048, 00:20:03.926 "data_size": 63488 00:20:03.926 } 00:20:03.926 ] 00:20:03.926 } 00:20:03.926 } 00:20:03.926 }' 00:20:03.926 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:03.926 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:20:03.926 BaseBdev2 00:20:03.926 BaseBdev3 00:20:03.926 BaseBdev4' 00:20:03.926 23:35:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:03.926 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:03.926 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:04.185 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:04.185 "name": "BaseBdev1", 00:20:04.185 "aliases": [ 00:20:04.185 "6b6e944c-48c1-4e80-9d01-54b8a7ef1fc0" 00:20:04.185 ], 00:20:04.185 "product_name": "Malloc disk", 00:20:04.185 "block_size": 512, 00:20:04.185 "num_blocks": 65536, 00:20:04.185 "uuid": "6b6e944c-48c1-4e80-9d01-54b8a7ef1fc0", 00:20:04.185 "assigned_rate_limits": { 00:20:04.185 "rw_ios_per_sec": 0, 00:20:04.185 "rw_mbytes_per_sec": 0, 00:20:04.185 "r_mbytes_per_sec": 0, 00:20:04.185 "w_mbytes_per_sec": 0 00:20:04.185 }, 00:20:04.185 "claimed": true, 00:20:04.185 "claim_type": "exclusive_write", 00:20:04.185 "zoned": false, 00:20:04.185 "supported_io_types": { 00:20:04.185 "read": true, 00:20:04.185 "write": true, 00:20:04.185 "unmap": true, 00:20:04.185 "write_zeroes": true, 00:20:04.185 "flush": true, 00:20:04.185 "reset": true, 00:20:04.185 "compare": false, 00:20:04.185 "compare_and_write": false, 00:20:04.185 "abort": true, 00:20:04.185 "nvme_admin": false, 00:20:04.185 "nvme_io": false 00:20:04.185 }, 00:20:04.185 "memory_domains": [ 00:20:04.185 { 00:20:04.185 "dma_device_id": "system", 00:20:04.185 "dma_device_type": 1 00:20:04.185 }, 00:20:04.185 { 00:20:04.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.185 "dma_device_type": 2 00:20:04.185 } 00:20:04.185 ], 00:20:04.185 "driver_specific": {} 00:20:04.185 }' 00:20:04.185 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:04.185 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:04.185 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:04.185 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:04.445 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:04.445 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:04.445 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:04.445 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:04.445 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:04.445 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:04.445 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:04.704 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:04.704 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:04.704 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:04.704 23:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:04.963 23:35:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:04.963 "name": "BaseBdev2", 00:20:04.963 "aliases": [ 00:20:04.963 "334151e2-095d-4262-8392-b6166aff61fe" 00:20:04.963 ], 00:20:04.963 "product_name": "Malloc disk", 00:20:04.963 "block_size": 512, 00:20:04.963 "num_blocks": 65536, 00:20:04.963 "uuid": "334151e2-095d-4262-8392-b6166aff61fe", 00:20:04.963 "assigned_rate_limits": { 00:20:04.963 "rw_ios_per_sec": 0, 00:20:04.963 "rw_mbytes_per_sec": 0, 00:20:04.963 "r_mbytes_per_sec": 0, 00:20:04.963 "w_mbytes_per_sec": 0 00:20:04.963 }, 00:20:04.963 "claimed": true, 00:20:04.963 "claim_type": "exclusive_write", 00:20:04.963 "zoned": false, 00:20:04.963 "supported_io_types": { 00:20:04.963 "read": true, 00:20:04.963 "write": true, 00:20:04.963 "unmap": true, 00:20:04.963 "write_zeroes": true, 00:20:04.963 "flush": true, 00:20:04.963 "reset": true, 00:20:04.963 "compare": false, 00:20:04.963 "compare_and_write": false, 00:20:04.963 "abort": true, 00:20:04.963 "nvme_admin": false, 00:20:04.963 "nvme_io": false 00:20:04.963 }, 00:20:04.963 "memory_domains": [ 00:20:04.963 { 00:20:04.963 "dma_device_id": "system", 00:20:04.963 "dma_device_type": 1 00:20:04.963 }, 00:20:04.963 { 00:20:04.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.963 "dma_device_type": 2 00:20:04.963 } 00:20:04.963 ], 00:20:04.963 "driver_specific": {} 00:20:04.963 }' 00:20:04.963 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:04.963 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:04.963 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:04.963 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:04.963 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:05.222 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.222 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:05.222 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:05.222 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:05.222 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:05.222 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:05.480 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:05.480 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:05.481 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:05.481 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:05.481 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:05.481 "name": "BaseBdev3", 00:20:05.481 "aliases": [ 00:20:05.481 "99b480bf-a422-4192-92e9-6fa23a9e7acf" 00:20:05.481 ], 00:20:05.481 "product_name": "Malloc disk", 00:20:05.481 "block_size": 512, 00:20:05.481 "num_blocks": 65536, 00:20:05.481 "uuid": "99b480bf-a422-4192-92e9-6fa23a9e7acf", 00:20:05.481 "assigned_rate_limits": { 00:20:05.481 "rw_ios_per_sec": 0, 00:20:05.481 "rw_mbytes_per_sec": 0, 
00:20:05.481 "r_mbytes_per_sec": 0, 00:20:05.481 "w_mbytes_per_sec": 0 00:20:05.481 }, 00:20:05.481 "claimed": true, 00:20:05.481 "claim_type": "exclusive_write", 00:20:05.481 "zoned": false, 00:20:05.481 "supported_io_types": { 00:20:05.481 "read": true, 00:20:05.481 "write": true, 00:20:05.481 "unmap": true, 00:20:05.481 "write_zeroes": true, 00:20:05.481 "flush": true, 00:20:05.481 "reset": true, 00:20:05.481 "compare": false, 00:20:05.481 "compare_and_write": false, 00:20:05.481 "abort": true, 00:20:05.481 "nvme_admin": false, 00:20:05.481 "nvme_io": false 00:20:05.481 }, 00:20:05.481 "memory_domains": [ 00:20:05.481 { 00:20:05.481 "dma_device_id": "system", 00:20:05.481 "dma_device_type": 1 00:20:05.481 }, 00:20:05.481 { 00:20:05.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.481 "dma_device_type": 2 00:20:05.481 } 00:20:05.481 ], 00:20:05.481 "driver_specific": {} 00:20:05.481 }' 00:20:05.481 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:05.481 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:05.739 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:05.739 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:05.739 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:05.739 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.739 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:05.739 23:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:05.998 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:05.998 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:05.998 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:05.998 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:05.998 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:05.998 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:05.998 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:06.257 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:06.257 "name": "BaseBdev4", 00:20:06.257 "aliases": [ 00:20:06.257 "150f46fa-89a1-4d6a-9999-0e04dc6d33c1" 00:20:06.257 ], 00:20:06.257 "product_name": "Malloc disk", 00:20:06.257 "block_size": 512, 00:20:06.257 "num_blocks": 65536, 00:20:06.257 "uuid": "150f46fa-89a1-4d6a-9999-0e04dc6d33c1", 00:20:06.257 "assigned_rate_limits": { 00:20:06.257 "rw_ios_per_sec": 0, 00:20:06.257 "rw_mbytes_per_sec": 0, 00:20:06.257 "r_mbytes_per_sec": 0, 00:20:06.257 "w_mbytes_per_sec": 0 00:20:06.257 }, 00:20:06.257 "claimed": true, 00:20:06.257 "claim_type": "exclusive_write", 00:20:06.257 "zoned": false, 00:20:06.257 "supported_io_types": { 00:20:06.257 "read": true, 00:20:06.257 "write": true, 00:20:06.257 "unmap": true, 00:20:06.257 "write_zeroes": true, 00:20:06.257 "flush": true, 00:20:06.257 "reset": true, 00:20:06.257 "compare": false, 00:20:06.257 
"compare_and_write": false, 00:20:06.257 "abort": true, 00:20:06.257 "nvme_admin": false, 00:20:06.257 "nvme_io": false 00:20:06.257 }, 00:20:06.257 "memory_domains": [ 00:20:06.257 { 00:20:06.257 "dma_device_id": "system", 00:20:06.257 "dma_device_type": 1 00:20:06.257 }, 00:20:06.257 { 00:20:06.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.257 "dma_device_type": 2 00:20:06.257 } 00:20:06.257 ], 00:20:06.257 "driver_specific": {} 00:20:06.257 }' 00:20:06.257 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:06.257 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:06.257 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:06.257 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:06.516 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:06.516 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:06.516 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:06.516 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:06.516 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:06.516 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:06.775 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:06.775 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:06.775 23:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:07.033 [2024-05-14 23:35:30.124601] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:07.033 [2024-05-14 23:35:30.124646] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:07.033 [2024-05-14 23:35:30.124709] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:20:07.033 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.034 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.034 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.034 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.034 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.034 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.292 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.292 "name": "Existed_Raid", 00:20:07.292 "uuid": "c8007e2a-d7b8-4663-b144-8d94690eee30", 00:20:07.292 "strip_size_kb": 64, 00:20:07.292 "state": "offline", 00:20:07.292 "raid_level": "concat", 00:20:07.292 "superblock": true, 00:20:07.292 "num_base_bdevs": 4, 00:20:07.292 "num_base_bdevs_discovered": 3, 00:20:07.292 "num_base_bdevs_operational": 3, 00:20:07.292 "base_bdevs_list": [ 00:20:07.292 { 00:20:07.292 "name": null, 00:20:07.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.292 "is_configured": false, 00:20:07.292 "data_offset": 2048, 00:20:07.292 "data_size": 63488 00:20:07.292 }, 00:20:07.292 { 00:20:07.292 "name": "BaseBdev2", 00:20:07.292 "uuid": "334151e2-095d-4262-8392-b6166aff61fe", 00:20:07.292 "is_configured": true, 00:20:07.292 "data_offset": 2048, 00:20:07.292 "data_size": 63488 00:20:07.292 }, 00:20:07.292 { 00:20:07.292 "name": "BaseBdev3", 00:20:07.292 "uuid": "99b480bf-a422-4192-92e9-6fa23a9e7acf", 00:20:07.292 "is_configured": true, 00:20:07.292 "data_offset": 2048, 00:20:07.292 "data_size": 63488 00:20:07.292 }, 00:20:07.292 { 00:20:07.292 "name": "BaseBdev4", 00:20:07.292 "uuid": "150f46fa-89a1-4d6a-9999-0e04dc6d33c1", 00:20:07.292 "is_configured": true, 00:20:07.292 "data_offset": 2048, 00:20:07.292 "data_size": 63488 00:20:07.292 } 00:20:07.292 ] 00:20:07.292 }' 00:20:07.292 23:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.292 23:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.227 23:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:08.227 23:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:08.227 23:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.227 23:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:08.227 23:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:08.227 23:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:08.227 23:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:08.486 [2024-05-14 23:35:31.654146] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:08.486 23:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
(( i++ )) 00:20:08.486 23:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:08.486 23:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:08.486 23:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.744 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:08.744 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:08.744 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:09.005 [2024-05-14 23:35:32.222971] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:09.265 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:09.265 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:09.265 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.265 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:09.524 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:09.524 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:09.524 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:09.524 [2024-05-14 23:35:32.741233] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:09.524 [2024-05-14 23:35:32.741313] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:20:09.784 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:09.784 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:09.784 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.784 23:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:20:10.041 23:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:20:10.041 23:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:20:10.041 23:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:20:10.041 23:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:20:10.041 23:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:10.041 23:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:10.041 BaseBdev2 00:20:10.041 23:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 
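The trace is now entering the rebuild loop (bdev_raid.sh@302-304) that recreates BaseBdev2 through BaseBdev4 as fresh malloc disks before Existed_Raid is reassembled. A minimal sketch of that loop under the commands visible in the trace; iterating over the names directly is an illustrative simplification of the script's index-based loop:

    for b in BaseBdev2 BaseBdev3 BaseBdev4; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_malloc_create 32 512 -b "$b"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_wait_for_examine
    done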
00:20:10.041 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:10.041 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:10.041 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:10.041 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:10.042 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:10.042 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:10.310 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:10.569 [ 00:20:10.569 { 00:20:10.569 "name": "BaseBdev2", 00:20:10.569 "aliases": [ 00:20:10.569 "13c5ce5b-4665-4b8a-84ea-bb7011871001" 00:20:10.569 ], 00:20:10.569 "product_name": "Malloc disk", 00:20:10.569 "block_size": 512, 00:20:10.569 "num_blocks": 65536, 00:20:10.569 "uuid": "13c5ce5b-4665-4b8a-84ea-bb7011871001", 00:20:10.569 "assigned_rate_limits": { 00:20:10.569 "rw_ios_per_sec": 0, 00:20:10.569 "rw_mbytes_per_sec": 0, 00:20:10.569 "r_mbytes_per_sec": 0, 00:20:10.569 "w_mbytes_per_sec": 0 00:20:10.569 }, 00:20:10.569 "claimed": false, 00:20:10.569 "zoned": false, 00:20:10.569 "supported_io_types": { 00:20:10.569 "read": true, 00:20:10.569 "write": true, 00:20:10.569 "unmap": true, 00:20:10.569 "write_zeroes": true, 00:20:10.569 "flush": true, 00:20:10.569 "reset": true, 00:20:10.569 "compare": false, 00:20:10.569 "compare_and_write": false, 00:20:10.569 "abort": true, 00:20:10.569 "nvme_admin": false, 00:20:10.569 "nvme_io": false 00:20:10.569 }, 00:20:10.569 "memory_domains": [ 00:20:10.569 { 00:20:10.569 "dma_device_id": "system", 00:20:10.569 "dma_device_type": 1 00:20:10.569 }, 00:20:10.569 { 00:20:10.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.569 "dma_device_type": 2 00:20:10.569 } 00:20:10.569 ], 00:20:10.569 "driver_specific": {} 00:20:10.569 } 00:20:10.569 ] 00:20:10.569 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:10.569 23:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:20:10.569 23:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:10.569 23:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:10.828 BaseBdev3 00:20:10.828 23:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:20:10.828 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:10.828 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:10.828 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:10.828 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:10.828 23:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:10.828 23:35:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:11.099 23:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:11.359 [ 00:20:11.359 { 00:20:11.359 "name": "BaseBdev3", 00:20:11.359 "aliases": [ 00:20:11.359 "a182fcd4-3ffb-4530-a05d-e89874966152" 00:20:11.359 ], 00:20:11.359 "product_name": "Malloc disk", 00:20:11.359 "block_size": 512, 00:20:11.359 "num_blocks": 65536, 00:20:11.359 "uuid": "a182fcd4-3ffb-4530-a05d-e89874966152", 00:20:11.359 "assigned_rate_limits": { 00:20:11.359 "rw_ios_per_sec": 0, 00:20:11.359 "rw_mbytes_per_sec": 0, 00:20:11.359 "r_mbytes_per_sec": 0, 00:20:11.359 "w_mbytes_per_sec": 0 00:20:11.359 }, 00:20:11.359 "claimed": false, 00:20:11.359 "zoned": false, 00:20:11.359 "supported_io_types": { 00:20:11.359 "read": true, 00:20:11.359 "write": true, 00:20:11.359 "unmap": true, 00:20:11.359 "write_zeroes": true, 00:20:11.359 "flush": true, 00:20:11.359 "reset": true, 00:20:11.359 "compare": false, 00:20:11.359 "compare_and_write": false, 00:20:11.359 "abort": true, 00:20:11.359 "nvme_admin": false, 00:20:11.359 "nvme_io": false 00:20:11.359 }, 00:20:11.359 "memory_domains": [ 00:20:11.359 { 00:20:11.359 "dma_device_id": "system", 00:20:11.359 "dma_device_type": 1 00:20:11.359 }, 00:20:11.359 { 00:20:11.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.359 "dma_device_type": 2 00:20:11.359 } 00:20:11.359 ], 00:20:11.359 "driver_specific": {} 00:20:11.359 } 00:20:11.359 ] 00:20:11.359 23:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:11.359 23:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:20:11.359 23:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:11.359 23:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:11.617 BaseBdev4 00:20:11.617 23:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:20:11.617 23:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:20:11.617 23:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:11.617 23:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:11.617 23:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:11.617 23:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:11.618 23:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:11.618 23:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:11.876 [ 00:20:11.876 { 00:20:11.876 "name": "BaseBdev4", 00:20:11.876 "aliases": [ 00:20:11.876 "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d" 00:20:11.876 ], 00:20:11.876 "product_name": "Malloc disk", 00:20:11.876 "block_size": 512, 
00:20:11.876 "num_blocks": 65536, 00:20:11.876 "uuid": "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d", 00:20:11.876 "assigned_rate_limits": { 00:20:11.876 "rw_ios_per_sec": 0, 00:20:11.876 "rw_mbytes_per_sec": 0, 00:20:11.876 "r_mbytes_per_sec": 0, 00:20:11.876 "w_mbytes_per_sec": 0 00:20:11.876 }, 00:20:11.876 "claimed": false, 00:20:11.876 "zoned": false, 00:20:11.876 "supported_io_types": { 00:20:11.876 "read": true, 00:20:11.876 "write": true, 00:20:11.876 "unmap": true, 00:20:11.876 "write_zeroes": true, 00:20:11.876 "flush": true, 00:20:11.876 "reset": true, 00:20:11.876 "compare": false, 00:20:11.876 "compare_and_write": false, 00:20:11.876 "abort": true, 00:20:11.876 "nvme_admin": false, 00:20:11.876 "nvme_io": false 00:20:11.876 }, 00:20:11.876 "memory_domains": [ 00:20:11.876 { 00:20:11.876 "dma_device_id": "system", 00:20:11.876 "dma_device_type": 1 00:20:11.876 }, 00:20:11.876 { 00:20:11.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.876 "dma_device_type": 2 00:20:11.876 } 00:20:11.876 ], 00:20:11.876 "driver_specific": {} 00:20:11.876 } 00:20:11.876 ] 00:20:11.876 23:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:11.876 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:20:11.876 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:11.876 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:12.134 [2024-05-14 23:35:35.354713] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:12.134 [2024-05-14 23:35:35.354827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:12.134 [2024-05-14 23:35:35.354862] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:12.134 [2024-05-14 23:35:35.357510] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:12.134 [2024-05-14 23:35:35.357632] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:12.134 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:12.134 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:12.134 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:12.134 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:12.134 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:12.134 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:12.134 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.134 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.134 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.135 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.135 23:35:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.135 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.393 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:12.393 "name": "Existed_Raid", 00:20:12.393 "uuid": "538054f8-9468-443d-93c2-0674341fdd81", 00:20:12.393 "strip_size_kb": 64, 00:20:12.393 "state": "configuring", 00:20:12.393 "raid_level": "concat", 00:20:12.393 "superblock": true, 00:20:12.393 "num_base_bdevs": 4, 00:20:12.393 "num_base_bdevs_discovered": 3, 00:20:12.393 "num_base_bdevs_operational": 4, 00:20:12.393 "base_bdevs_list": [ 00:20:12.393 { 00:20:12.393 "name": "BaseBdev1", 00:20:12.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.393 "is_configured": false, 00:20:12.393 "data_offset": 0, 00:20:12.393 "data_size": 0 00:20:12.393 }, 00:20:12.393 { 00:20:12.393 "name": "BaseBdev2", 00:20:12.394 "uuid": "13c5ce5b-4665-4b8a-84ea-bb7011871001", 00:20:12.394 "is_configured": true, 00:20:12.394 "data_offset": 2048, 00:20:12.394 "data_size": 63488 00:20:12.394 }, 00:20:12.394 { 00:20:12.394 "name": "BaseBdev3", 00:20:12.394 "uuid": "a182fcd4-3ffb-4530-a05d-e89874966152", 00:20:12.394 "is_configured": true, 00:20:12.394 "data_offset": 2048, 00:20:12.394 "data_size": 63488 00:20:12.394 }, 00:20:12.394 { 00:20:12.394 "name": "BaseBdev4", 00:20:12.394 "uuid": "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d", 00:20:12.394 "is_configured": true, 00:20:12.394 "data_offset": 2048, 00:20:12.394 "data_size": 63488 00:20:12.394 } 00:20:12.394 ] 00:20:12.394 }' 00:20:12.394 23:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.394 23:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.339 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:13.339 [2024-05-14 23:35:36.494878] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:13.339 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:13.339 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:13.339 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:13.339 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:13.339 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:13.339 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:13.339 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:13.339 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:13.339 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:13.340 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:13.340 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.340 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.598 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:13.598 "name": "Existed_Raid", 00:20:13.598 "uuid": "538054f8-9468-443d-93c2-0674341fdd81", 00:20:13.598 "strip_size_kb": 64, 00:20:13.598 "state": "configuring", 00:20:13.598 "raid_level": "concat", 00:20:13.598 "superblock": true, 00:20:13.598 "num_base_bdevs": 4, 00:20:13.598 "num_base_bdevs_discovered": 2, 00:20:13.598 "num_base_bdevs_operational": 4, 00:20:13.598 "base_bdevs_list": [ 00:20:13.598 { 00:20:13.598 "name": "BaseBdev1", 00:20:13.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.598 "is_configured": false, 00:20:13.598 "data_offset": 0, 00:20:13.598 "data_size": 0 00:20:13.598 }, 00:20:13.598 { 00:20:13.598 "name": null, 00:20:13.598 "uuid": "13c5ce5b-4665-4b8a-84ea-bb7011871001", 00:20:13.598 "is_configured": false, 00:20:13.598 "data_offset": 2048, 00:20:13.598 "data_size": 63488 00:20:13.598 }, 00:20:13.598 { 00:20:13.598 "name": "BaseBdev3", 00:20:13.598 "uuid": "a182fcd4-3ffb-4530-a05d-e89874966152", 00:20:13.599 "is_configured": true, 00:20:13.599 "data_offset": 2048, 00:20:13.599 "data_size": 63488 00:20:13.599 }, 00:20:13.599 { 00:20:13.599 "name": "BaseBdev4", 00:20:13.599 "uuid": "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d", 00:20:13.599 "is_configured": true, 00:20:13.599 "data_offset": 2048, 00:20:13.599 "data_size": 63488 00:20:13.599 } 00:20:13.599 ] 00:20:13.599 }' 00:20:13.599 23:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:13.599 23:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.166 23:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.166 23:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:14.425 23:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:20:14.425 23:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:14.683 [2024-05-14 23:35:37.804697] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:14.683 BaseBdev1 00:20:14.683 23:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:20:14.683 23:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:14.683 23:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:14.683 23:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:14.683 23:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:14.683 23:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:14.683 23:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.941 23:35:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:15.199 [ 00:20:15.199 { 00:20:15.199 "name": "BaseBdev1", 00:20:15.199 "aliases": [ 00:20:15.199 "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca" 00:20:15.199 ], 00:20:15.199 "product_name": "Malloc disk", 00:20:15.199 "block_size": 512, 00:20:15.199 "num_blocks": 65536, 00:20:15.199 "uuid": "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca", 00:20:15.199 "assigned_rate_limits": { 00:20:15.199 "rw_ios_per_sec": 0, 00:20:15.199 "rw_mbytes_per_sec": 0, 00:20:15.199 "r_mbytes_per_sec": 0, 00:20:15.199 "w_mbytes_per_sec": 0 00:20:15.199 }, 00:20:15.199 "claimed": true, 00:20:15.199 "claim_type": "exclusive_write", 00:20:15.199 "zoned": false, 00:20:15.199 "supported_io_types": { 00:20:15.199 "read": true, 00:20:15.199 "write": true, 00:20:15.199 "unmap": true, 00:20:15.199 "write_zeroes": true, 00:20:15.199 "flush": true, 00:20:15.199 "reset": true, 00:20:15.199 "compare": false, 00:20:15.199 "compare_and_write": false, 00:20:15.199 "abort": true, 00:20:15.199 "nvme_admin": false, 00:20:15.199 "nvme_io": false 00:20:15.199 }, 00:20:15.199 "memory_domains": [ 00:20:15.199 { 00:20:15.199 "dma_device_id": "system", 00:20:15.199 "dma_device_type": 1 00:20:15.199 }, 00:20:15.199 { 00:20:15.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.199 "dma_device_type": 2 00:20:15.199 } 00:20:15.199 ], 00:20:15.199 "driver_specific": {} 00:20:15.199 } 00:20:15.199 ] 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.199 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.458 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:15.458 "name": "Existed_Raid", 00:20:15.458 "uuid": "538054f8-9468-443d-93c2-0674341fdd81", 00:20:15.458 "strip_size_kb": 64, 00:20:15.458 "state": "configuring", 00:20:15.458 "raid_level": "concat", 00:20:15.458 "superblock": true, 00:20:15.458 "num_base_bdevs": 4, 00:20:15.458 "num_base_bdevs_discovered": 3, 
00:20:15.458 "num_base_bdevs_operational": 4, 00:20:15.458 "base_bdevs_list": [ 00:20:15.458 { 00:20:15.458 "name": "BaseBdev1", 00:20:15.458 "uuid": "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca", 00:20:15.458 "is_configured": true, 00:20:15.458 "data_offset": 2048, 00:20:15.458 "data_size": 63488 00:20:15.458 }, 00:20:15.458 { 00:20:15.458 "name": null, 00:20:15.458 "uuid": "13c5ce5b-4665-4b8a-84ea-bb7011871001", 00:20:15.458 "is_configured": false, 00:20:15.458 "data_offset": 2048, 00:20:15.458 "data_size": 63488 00:20:15.458 }, 00:20:15.458 { 00:20:15.458 "name": "BaseBdev3", 00:20:15.458 "uuid": "a182fcd4-3ffb-4530-a05d-e89874966152", 00:20:15.458 "is_configured": true, 00:20:15.458 "data_offset": 2048, 00:20:15.458 "data_size": 63488 00:20:15.458 }, 00:20:15.458 { 00:20:15.458 "name": "BaseBdev4", 00:20:15.458 "uuid": "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d", 00:20:15.458 "is_configured": true, 00:20:15.458 "data_offset": 2048, 00:20:15.458 "data_size": 63488 00:20:15.458 } 00:20:15.458 ] 00:20:15.458 }' 00:20:15.458 23:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:15.458 23:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.025 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.025 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:16.284 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:16.284 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:16.543 [2024-05-14 23:35:39.709093] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:16.543 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:16.543 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:16.543 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:16.543 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:16.543 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:16.543 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:16.543 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.544 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.544 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.544 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.544 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.544 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.801 23:35:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:16.801 "name": "Existed_Raid", 00:20:16.801 "uuid": "538054f8-9468-443d-93c2-0674341fdd81", 00:20:16.801 "strip_size_kb": 64, 00:20:16.801 "state": "configuring", 00:20:16.801 "raid_level": "concat", 00:20:16.801 "superblock": true, 00:20:16.801 "num_base_bdevs": 4, 00:20:16.801 "num_base_bdevs_discovered": 2, 00:20:16.801 "num_base_bdevs_operational": 4, 00:20:16.801 "base_bdevs_list": [ 00:20:16.801 { 00:20:16.801 "name": "BaseBdev1", 00:20:16.801 "uuid": "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca", 00:20:16.801 "is_configured": true, 00:20:16.801 "data_offset": 2048, 00:20:16.801 "data_size": 63488 00:20:16.801 }, 00:20:16.801 { 00:20:16.801 "name": null, 00:20:16.801 "uuid": "13c5ce5b-4665-4b8a-84ea-bb7011871001", 00:20:16.801 "is_configured": false, 00:20:16.801 "data_offset": 2048, 00:20:16.801 "data_size": 63488 00:20:16.801 }, 00:20:16.801 { 00:20:16.801 "name": null, 00:20:16.801 "uuid": "a182fcd4-3ffb-4530-a05d-e89874966152", 00:20:16.801 "is_configured": false, 00:20:16.801 "data_offset": 2048, 00:20:16.801 "data_size": 63488 00:20:16.801 }, 00:20:16.801 { 00:20:16.801 "name": "BaseBdev4", 00:20:16.801 "uuid": "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d", 00:20:16.801 "is_configured": true, 00:20:16.801 "data_offset": 2048, 00:20:16.801 "data_size": 63488 00:20:16.801 } 00:20:16.801 ] 00:20:16.801 }' 00:20:16.801 23:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.801 23:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.368 23:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.368 23:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:17.626 23:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:20:17.626 23:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:17.884 [2024-05-14 23:35:41.033519] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:17.884 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:17.884 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:17.884 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:17.884 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:17.884 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:17.884 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:17.884 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:17.884 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:17.884 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:17.884 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:17.884 
23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.884 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.143 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:18.143 "name": "Existed_Raid", 00:20:18.143 "uuid": "538054f8-9468-443d-93c2-0674341fdd81", 00:20:18.143 "strip_size_kb": 64, 00:20:18.143 "state": "configuring", 00:20:18.143 "raid_level": "concat", 00:20:18.143 "superblock": true, 00:20:18.143 "num_base_bdevs": 4, 00:20:18.143 "num_base_bdevs_discovered": 3, 00:20:18.143 "num_base_bdevs_operational": 4, 00:20:18.143 "base_bdevs_list": [ 00:20:18.143 { 00:20:18.143 "name": "BaseBdev1", 00:20:18.143 "uuid": "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca", 00:20:18.143 "is_configured": true, 00:20:18.143 "data_offset": 2048, 00:20:18.143 "data_size": 63488 00:20:18.143 }, 00:20:18.143 { 00:20:18.143 "name": null, 00:20:18.143 "uuid": "13c5ce5b-4665-4b8a-84ea-bb7011871001", 00:20:18.143 "is_configured": false, 00:20:18.143 "data_offset": 2048, 00:20:18.143 "data_size": 63488 00:20:18.143 }, 00:20:18.143 { 00:20:18.143 "name": "BaseBdev3", 00:20:18.143 "uuid": "a182fcd4-3ffb-4530-a05d-e89874966152", 00:20:18.143 "is_configured": true, 00:20:18.143 "data_offset": 2048, 00:20:18.143 "data_size": 63488 00:20:18.143 }, 00:20:18.143 { 00:20:18.143 "name": "BaseBdev4", 00:20:18.143 "uuid": "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d", 00:20:18.143 "is_configured": true, 00:20:18.143 "data_offset": 2048, 00:20:18.143 "data_size": 63488 00:20:18.143 } 00:20:18.143 ] 00:20:18.143 }' 00:20:18.143 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:18.143 23:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.710 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.710 23:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:18.969 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:20:18.969 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:18.969 [2024-05-14 23:35:42.237713] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
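The remove/re-add cycle traced above is driven by two raid-specific RPCs, bdev_raid_remove_base_bdev and bdev_raid_add_base_bdev, and each transition is verified by reading the per-slot is_configured flag straight out of bdev_raid_get_bdevs. A hedged sketch of one such round trip for the BaseBdev3 slot (index 2), with commands and jq filters taken verbatim from the trace and the wrapper function purely illustrative:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    rpc bdev_raid_remove_base_bdev BaseBdev3
    rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # false while the slot is empty
    rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # true once the bdev is claimed again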
00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.228 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.487 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:19.487 "name": "Existed_Raid", 00:20:19.487 "uuid": "538054f8-9468-443d-93c2-0674341fdd81", 00:20:19.487 "strip_size_kb": 64, 00:20:19.487 "state": "configuring", 00:20:19.487 "raid_level": "concat", 00:20:19.487 "superblock": true, 00:20:19.487 "num_base_bdevs": 4, 00:20:19.487 "num_base_bdevs_discovered": 2, 00:20:19.488 "num_base_bdevs_operational": 4, 00:20:19.488 "base_bdevs_list": [ 00:20:19.488 { 00:20:19.488 "name": null, 00:20:19.488 "uuid": "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca", 00:20:19.488 "is_configured": false, 00:20:19.488 "data_offset": 2048, 00:20:19.488 "data_size": 63488 00:20:19.488 }, 00:20:19.488 { 00:20:19.488 "name": null, 00:20:19.488 "uuid": "13c5ce5b-4665-4b8a-84ea-bb7011871001", 00:20:19.488 "is_configured": false, 00:20:19.488 "data_offset": 2048, 00:20:19.488 "data_size": 63488 00:20:19.488 }, 00:20:19.488 { 00:20:19.488 "name": "BaseBdev3", 00:20:19.488 "uuid": "a182fcd4-3ffb-4530-a05d-e89874966152", 00:20:19.488 "is_configured": true, 00:20:19.488 "data_offset": 2048, 00:20:19.488 "data_size": 63488 00:20:19.488 }, 00:20:19.488 { 00:20:19.488 "name": "BaseBdev4", 00:20:19.488 "uuid": "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d", 00:20:19.488 "is_configured": true, 00:20:19.488 "data_offset": 2048, 00:20:19.488 "data_size": 63488 00:20:19.488 } 00:20:19.488 ] 00:20:19.488 }' 00:20:19.488 23:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:19.488 23:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.056 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.056 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:20.315 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:20:20.315 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:20.574 [2024-05-14 23:35:43.607506] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # 
local raid_level=concat 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:20.574 "name": "Existed_Raid", 00:20:20.574 "uuid": "538054f8-9468-443d-93c2-0674341fdd81", 00:20:20.574 "strip_size_kb": 64, 00:20:20.574 "state": "configuring", 00:20:20.574 "raid_level": "concat", 00:20:20.574 "superblock": true, 00:20:20.574 "num_base_bdevs": 4, 00:20:20.574 "num_base_bdevs_discovered": 3, 00:20:20.574 "num_base_bdevs_operational": 4, 00:20:20.574 "base_bdevs_list": [ 00:20:20.574 { 00:20:20.574 "name": null, 00:20:20.574 "uuid": "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca", 00:20:20.574 "is_configured": false, 00:20:20.574 "data_offset": 2048, 00:20:20.574 "data_size": 63488 00:20:20.574 }, 00:20:20.574 { 00:20:20.574 "name": "BaseBdev2", 00:20:20.574 "uuid": "13c5ce5b-4665-4b8a-84ea-bb7011871001", 00:20:20.574 "is_configured": true, 00:20:20.574 "data_offset": 2048, 00:20:20.574 "data_size": 63488 00:20:20.574 }, 00:20:20.574 { 00:20:20.574 "name": "BaseBdev3", 00:20:20.574 "uuid": "a182fcd4-3ffb-4530-a05d-e89874966152", 00:20:20.574 "is_configured": true, 00:20:20.574 "data_offset": 2048, 00:20:20.574 "data_size": 63488 00:20:20.574 }, 00:20:20.574 { 00:20:20.574 "name": "BaseBdev4", 00:20:20.574 "uuid": "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d", 00:20:20.574 "is_configured": true, 00:20:20.574 "data_offset": 2048, 00:20:20.574 "data_size": 63488 00:20:20.574 } 00:20:20.574 ] 00:20:20.574 }' 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:20.574 23:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.511 23:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.511 23:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:21.511 23:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:20:21.511 23:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.511 23:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:21.770 23:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca 00:20:22.028 NewBaseBdev 00:20:22.029 [2024-05-14 23:35:45.197431] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:22.029 [2024-05-14 23:35:45.197606] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:20:22.029 [2024-05-14 23:35:45.197632] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:22.029 [2024-05-14 23:35:45.197721] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:22.029 [2024-05-14 23:35:45.197940] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:20:22.029 [2024-05-14 23:35:45.197957] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:20:22.029 [2024-05-14 23:35:45.198052] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.029 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:20:22.029 23:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:20:22.029 23:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:22.029 23:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:22.029 23:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:22.029 23:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:22.029 23:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:22.288 23:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:22.546 [ 00:20:22.546 { 00:20:22.547 "name": "NewBaseBdev", 00:20:22.547 "aliases": [ 00:20:22.547 "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca" 00:20:22.547 ], 00:20:22.547 "product_name": "Malloc disk", 00:20:22.547 "block_size": 512, 00:20:22.547 "num_blocks": 65536, 00:20:22.547 "uuid": "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca", 00:20:22.547 "assigned_rate_limits": { 00:20:22.547 "rw_ios_per_sec": 0, 00:20:22.547 "rw_mbytes_per_sec": 0, 00:20:22.547 "r_mbytes_per_sec": 0, 00:20:22.547 "w_mbytes_per_sec": 0 00:20:22.547 }, 00:20:22.547 "claimed": true, 00:20:22.547 "claim_type": "exclusive_write", 00:20:22.547 "zoned": false, 00:20:22.547 "supported_io_types": { 00:20:22.547 "read": true, 00:20:22.547 "write": true, 00:20:22.547 "unmap": true, 00:20:22.547 "write_zeroes": true, 00:20:22.547 "flush": true, 00:20:22.547 "reset": true, 00:20:22.547 "compare": false, 00:20:22.547 "compare_and_write": false, 00:20:22.547 "abort": true, 00:20:22.547 "nvme_admin": false, 00:20:22.547 "nvme_io": false 00:20:22.547 }, 00:20:22.547 "memory_domains": [ 00:20:22.547 { 00:20:22.547 "dma_device_id": "system", 00:20:22.547 "dma_device_type": 1 00:20:22.547 }, 00:20:22.547 { 00:20:22.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.547 "dma_device_type": 2 00:20:22.547 } 00:20:22.547 ], 00:20:22.547 "driver_specific": {} 00:20:22.547 } 00:20:22.547 ] 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # return 0 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.547 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.806 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:22.806 "name": "Existed_Raid", 00:20:22.806 "uuid": "538054f8-9468-443d-93c2-0674341fdd81", 00:20:22.806 "strip_size_kb": 64, 00:20:22.806 "state": "online", 00:20:22.806 "raid_level": "concat", 00:20:22.806 "superblock": true, 00:20:22.806 "num_base_bdevs": 4, 00:20:22.806 "num_base_bdevs_discovered": 4, 00:20:22.806 "num_base_bdevs_operational": 4, 00:20:22.806 "base_bdevs_list": [ 00:20:22.806 { 00:20:22.806 "name": "NewBaseBdev", 00:20:22.806 "uuid": "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca", 00:20:22.806 "is_configured": true, 00:20:22.806 "data_offset": 2048, 00:20:22.806 "data_size": 63488 00:20:22.806 }, 00:20:22.806 { 00:20:22.806 "name": "BaseBdev2", 00:20:22.806 "uuid": "13c5ce5b-4665-4b8a-84ea-bb7011871001", 00:20:22.806 "is_configured": true, 00:20:22.806 "data_offset": 2048, 00:20:22.806 "data_size": 63488 00:20:22.806 }, 00:20:22.806 { 00:20:22.806 "name": "BaseBdev3", 00:20:22.806 "uuid": "a182fcd4-3ffb-4530-a05d-e89874966152", 00:20:22.806 "is_configured": true, 00:20:22.806 "data_offset": 2048, 00:20:22.806 "data_size": 63488 00:20:22.806 }, 00:20:22.806 { 00:20:22.806 "name": "BaseBdev4", 00:20:22.806 "uuid": "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d", 00:20:22.806 "is_configured": true, 00:20:22.806 "data_offset": 2048, 00:20:22.806 "data_size": 63488 00:20:22.806 } 00:20:22.806 ] 00:20:22.806 }' 00:20:22.806 23:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:22.806 23:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.373 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:20:23.373 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:23.373 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local 
raid_bdev_info 00:20:23.373 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:23.373 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:23.373 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:20:23.373 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:23.373 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:23.633 [2024-05-14 23:35:46.726040] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.633 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:23.633 "name": "Existed_Raid", 00:20:23.633 "aliases": [ 00:20:23.633 "538054f8-9468-443d-93c2-0674341fdd81" 00:20:23.633 ], 00:20:23.633 "product_name": "Raid Volume", 00:20:23.633 "block_size": 512, 00:20:23.633 "num_blocks": 253952, 00:20:23.633 "uuid": "538054f8-9468-443d-93c2-0674341fdd81", 00:20:23.633 "assigned_rate_limits": { 00:20:23.633 "rw_ios_per_sec": 0, 00:20:23.633 "rw_mbytes_per_sec": 0, 00:20:23.633 "r_mbytes_per_sec": 0, 00:20:23.633 "w_mbytes_per_sec": 0 00:20:23.633 }, 00:20:23.633 "claimed": false, 00:20:23.633 "zoned": false, 00:20:23.633 "supported_io_types": { 00:20:23.633 "read": true, 00:20:23.633 "write": true, 00:20:23.633 "unmap": true, 00:20:23.633 "write_zeroes": true, 00:20:23.633 "flush": true, 00:20:23.633 "reset": true, 00:20:23.633 "compare": false, 00:20:23.633 "compare_and_write": false, 00:20:23.633 "abort": false, 00:20:23.633 "nvme_admin": false, 00:20:23.633 "nvme_io": false 00:20:23.633 }, 00:20:23.633 "memory_domains": [ 00:20:23.633 { 00:20:23.633 "dma_device_id": "system", 00:20:23.633 "dma_device_type": 1 00:20:23.633 }, 00:20:23.633 { 00:20:23.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.633 "dma_device_type": 2 00:20:23.633 }, 00:20:23.633 { 00:20:23.633 "dma_device_id": "system", 00:20:23.633 "dma_device_type": 1 00:20:23.633 }, 00:20:23.633 { 00:20:23.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.633 "dma_device_type": 2 00:20:23.633 }, 00:20:23.633 { 00:20:23.633 "dma_device_id": "system", 00:20:23.633 "dma_device_type": 1 00:20:23.633 }, 00:20:23.633 { 00:20:23.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.633 "dma_device_type": 2 00:20:23.633 }, 00:20:23.633 { 00:20:23.633 "dma_device_id": "system", 00:20:23.633 "dma_device_type": 1 00:20:23.633 }, 00:20:23.633 { 00:20:23.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.633 "dma_device_type": 2 00:20:23.633 } 00:20:23.633 ], 00:20:23.633 "driver_specific": { 00:20:23.633 "raid": { 00:20:23.633 "uuid": "538054f8-9468-443d-93c2-0674341fdd81", 00:20:23.633 "strip_size_kb": 64, 00:20:23.633 "state": "online", 00:20:23.633 "raid_level": "concat", 00:20:23.633 "superblock": true, 00:20:23.633 "num_base_bdevs": 4, 00:20:23.633 "num_base_bdevs_discovered": 4, 00:20:23.633 "num_base_bdevs_operational": 4, 00:20:23.633 "base_bdevs_list": [ 00:20:23.633 { 00:20:23.633 "name": "NewBaseBdev", 00:20:23.633 "uuid": "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca", 00:20:23.633 "is_configured": true, 00:20:23.633 "data_offset": 2048, 00:20:23.633 "data_size": 63488 00:20:23.633 }, 00:20:23.633 { 00:20:23.633 "name": "BaseBdev2", 00:20:23.633 "uuid": "13c5ce5b-4665-4b8a-84ea-bb7011871001", 00:20:23.633 "is_configured": true, 
00:20:23.633 "data_offset": 2048, 00:20:23.633 "data_size": 63488 00:20:23.633 }, 00:20:23.633 { 00:20:23.633 "name": "BaseBdev3", 00:20:23.633 "uuid": "a182fcd4-3ffb-4530-a05d-e89874966152", 00:20:23.633 "is_configured": true, 00:20:23.633 "data_offset": 2048, 00:20:23.633 "data_size": 63488 00:20:23.633 }, 00:20:23.633 { 00:20:23.633 "name": "BaseBdev4", 00:20:23.633 "uuid": "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d", 00:20:23.633 "is_configured": true, 00:20:23.633 "data_offset": 2048, 00:20:23.633 "data_size": 63488 00:20:23.633 } 00:20:23.633 ] 00:20:23.633 } 00:20:23.633 } 00:20:23.633 }' 00:20:23.633 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:23.633 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:20:23.633 BaseBdev2 00:20:23.633 BaseBdev3 00:20:23.633 BaseBdev4' 00:20:23.633 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:23.633 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:23.633 23:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:23.892 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:23.892 "name": "NewBaseBdev", 00:20:23.892 "aliases": [ 00:20:23.892 "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca" 00:20:23.892 ], 00:20:23.892 "product_name": "Malloc disk", 00:20:23.892 "block_size": 512, 00:20:23.892 "num_blocks": 65536, 00:20:23.892 "uuid": "7f1f1dea-35ff-40a5-be8b-eac2d1cf95ca", 00:20:23.892 "assigned_rate_limits": { 00:20:23.892 "rw_ios_per_sec": 0, 00:20:23.892 "rw_mbytes_per_sec": 0, 00:20:23.892 "r_mbytes_per_sec": 0, 00:20:23.892 "w_mbytes_per_sec": 0 00:20:23.892 }, 00:20:23.892 "claimed": true, 00:20:23.892 "claim_type": "exclusive_write", 00:20:23.892 "zoned": false, 00:20:23.892 "supported_io_types": { 00:20:23.892 "read": true, 00:20:23.892 "write": true, 00:20:23.892 "unmap": true, 00:20:23.892 "write_zeroes": true, 00:20:23.892 "flush": true, 00:20:23.892 "reset": true, 00:20:23.892 "compare": false, 00:20:23.892 "compare_and_write": false, 00:20:23.892 "abort": true, 00:20:23.892 "nvme_admin": false, 00:20:23.892 "nvme_io": false 00:20:23.892 }, 00:20:23.892 "memory_domains": [ 00:20:23.892 { 00:20:23.892 "dma_device_id": "system", 00:20:23.892 "dma_device_type": 1 00:20:23.892 }, 00:20:23.892 { 00:20:23.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.892 "dma_device_type": 2 00:20:23.892 } 00:20:23.892 ], 00:20:23.892 "driver_specific": {} 00:20:23.892 }' 00:20:23.892 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:23.892 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:23.892 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:23.892 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:24.151 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:24.151 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:24.151 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:24.151 
23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:24.151 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:24.151 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:24.410 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:24.410 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:24.410 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:24.410 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:24.410 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:24.668 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:24.668 "name": "BaseBdev2", 00:20:24.668 "aliases": [ 00:20:24.668 "13c5ce5b-4665-4b8a-84ea-bb7011871001" 00:20:24.668 ], 00:20:24.668 "product_name": "Malloc disk", 00:20:24.668 "block_size": 512, 00:20:24.669 "num_blocks": 65536, 00:20:24.669 "uuid": "13c5ce5b-4665-4b8a-84ea-bb7011871001", 00:20:24.669 "assigned_rate_limits": { 00:20:24.669 "rw_ios_per_sec": 0, 00:20:24.669 "rw_mbytes_per_sec": 0, 00:20:24.669 "r_mbytes_per_sec": 0, 00:20:24.669 "w_mbytes_per_sec": 0 00:20:24.669 }, 00:20:24.669 "claimed": true, 00:20:24.669 "claim_type": "exclusive_write", 00:20:24.669 "zoned": false, 00:20:24.669 "supported_io_types": { 00:20:24.669 "read": true, 00:20:24.669 "write": true, 00:20:24.669 "unmap": true, 00:20:24.669 "write_zeroes": true, 00:20:24.669 "flush": true, 00:20:24.669 "reset": true, 00:20:24.669 "compare": false, 00:20:24.669 "compare_and_write": false, 00:20:24.669 "abort": true, 00:20:24.669 "nvme_admin": false, 00:20:24.669 "nvme_io": false 00:20:24.669 }, 00:20:24.669 "memory_domains": [ 00:20:24.669 { 00:20:24.669 "dma_device_id": "system", 00:20:24.669 "dma_device_type": 1 00:20:24.669 }, 00:20:24.669 { 00:20:24.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.669 "dma_device_type": 2 00:20:24.669 } 00:20:24.669 ], 00:20:24.669 "driver_specific": {} 00:20:24.669 }' 00:20:24.669 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:24.669 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:24.669 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:24.669 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:24.669 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:24.927 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:24.927 23:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:24.927 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:24.927 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:24.927 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:24.927 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:25.186 23:35:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:25.186 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:25.186 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:25.186 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:25.186 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:25.186 "name": "BaseBdev3", 00:20:25.186 "aliases": [ 00:20:25.186 "a182fcd4-3ffb-4530-a05d-e89874966152" 00:20:25.186 ], 00:20:25.186 "product_name": "Malloc disk", 00:20:25.186 "block_size": 512, 00:20:25.186 "num_blocks": 65536, 00:20:25.186 "uuid": "a182fcd4-3ffb-4530-a05d-e89874966152", 00:20:25.186 "assigned_rate_limits": { 00:20:25.186 "rw_ios_per_sec": 0, 00:20:25.186 "rw_mbytes_per_sec": 0, 00:20:25.186 "r_mbytes_per_sec": 0, 00:20:25.186 "w_mbytes_per_sec": 0 00:20:25.186 }, 00:20:25.186 "claimed": true, 00:20:25.186 "claim_type": "exclusive_write", 00:20:25.186 "zoned": false, 00:20:25.186 "supported_io_types": { 00:20:25.186 "read": true, 00:20:25.186 "write": true, 00:20:25.186 "unmap": true, 00:20:25.186 "write_zeroes": true, 00:20:25.186 "flush": true, 00:20:25.186 "reset": true, 00:20:25.186 "compare": false, 00:20:25.186 "compare_and_write": false, 00:20:25.186 "abort": true, 00:20:25.186 "nvme_admin": false, 00:20:25.186 "nvme_io": false 00:20:25.186 }, 00:20:25.186 "memory_domains": [ 00:20:25.186 { 00:20:25.186 "dma_device_id": "system", 00:20:25.186 "dma_device_type": 1 00:20:25.186 }, 00:20:25.186 { 00:20:25.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.186 "dma_device_type": 2 00:20:25.186 } 00:20:25.186 ], 00:20:25.186 "driver_specific": {} 00:20:25.186 }' 00:20:25.186 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:25.446 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:25.446 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:25.446 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:25.446 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:25.446 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:25.446 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:25.705 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:25.705 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:25.705 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:25.705 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:25.705 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:25.705 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:25.705 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:25.705 23:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:25.964 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:25.964 "name": "BaseBdev4", 00:20:25.964 "aliases": [ 00:20:25.964 "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d" 00:20:25.964 ], 00:20:25.964 "product_name": "Malloc disk", 00:20:25.964 "block_size": 512, 00:20:25.964 "num_blocks": 65536, 00:20:25.964 "uuid": "7a7ad57f-5a6c-49e4-9f3c-21194d7f6b6d", 00:20:25.964 "assigned_rate_limits": { 00:20:25.964 "rw_ios_per_sec": 0, 00:20:25.964 "rw_mbytes_per_sec": 0, 00:20:25.964 "r_mbytes_per_sec": 0, 00:20:25.964 "w_mbytes_per_sec": 0 00:20:25.964 }, 00:20:25.964 "claimed": true, 00:20:25.964 "claim_type": "exclusive_write", 00:20:25.964 "zoned": false, 00:20:25.964 "supported_io_types": { 00:20:25.964 "read": true, 00:20:25.964 "write": true, 00:20:25.964 "unmap": true, 00:20:25.964 "write_zeroes": true, 00:20:25.964 "flush": true, 00:20:25.964 "reset": true, 00:20:25.964 "compare": false, 00:20:25.964 "compare_and_write": false, 00:20:25.964 "abort": true, 00:20:25.964 "nvme_admin": false, 00:20:25.964 "nvme_io": false 00:20:25.964 }, 00:20:25.964 "memory_domains": [ 00:20:25.964 { 00:20:25.964 "dma_device_id": "system", 00:20:25.964 "dma_device_type": 1 00:20:25.964 }, 00:20:25.964 { 00:20:25.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.964 "dma_device_type": 2 00:20:25.964 } 00:20:25.964 ], 00:20:25.965 "driver_specific": {} 00:20:25.965 }' 00:20:25.965 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:25.965 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:26.223 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:26.223 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:26.223 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:26.223 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:26.223 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:26.223 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:26.223 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:26.224 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:26.482 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:26.482 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:26.482 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:26.742 [2024-05-14 23:35:49.794247] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:26.742 [2024-05-14 23:35:49.794287] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:26.742 [2024-05-14 23:35:49.794353] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.742 [2024-05-14 23:35:49.794400] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.742 [2024-05-14 23:35:49.794411] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:20:26.742 23:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 68109 00:20:26.742 23:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 68109 ']' 00:20:26.742 23:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 68109 00:20:26.742 23:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:20:26.742 23:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:26.742 23:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68109 00:20:26.742 killing process with pid 68109 00:20:26.742 23:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:26.742 23:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:26.742 23:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68109' 00:20:26.742 23:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 68109 00:20:26.742 23:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 68109 00:20:26.742 [2024-05-14 23:35:49.836268] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:27.001 [2024-05-14 23:35:50.151745] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:28.380 23:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:20:28.380 00:20:28.380 real 0m34.840s 00:20:28.380 user 1m5.741s 00:20:28.380 sys 0m3.405s 00:20:28.380 23:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:28.380 23:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.380 ************************************ 00:20:28.380 END TEST raid_state_function_test_sb 00:20:28.380 ************************************ 00:20:28.380 23:35:51 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:20:28.380 23:35:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:20:28.380 23:35:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:28.380 23:35:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:28.380 ************************************ 00:20:28.380 START TEST raid_superblock_test 00:20:28.380 ************************************ 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 4 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:28.380 
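The hand-off between the two test cases is also visible in the trace just above: the state-function test tears down its array over RPC and then stops the bdev_svc app it was driving, before raid_superblock_test starts a fresh one on the same socket. A rough equivalent of that teardown, kept to the commands the log itself shows (68109 is the pid specific to this run; killprocess is the autotest_common.sh helper seen killing and waiting on it):

    # Delete the raid bdev, then stop the bdev_svc process that hosted it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
    killprocess 68109    # kill + wait, per the autotest_common.sh xtrace above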
23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:28.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69227 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69227 /var/tmp/spdk-raid.sock 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 69227 ']' 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:28.380 23:35:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.380 [2024-05-14 23:35:51.597279] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:20:28.380 [2024-05-14 23:35:51.597489] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69227 ] 00:20:28.639 [2024-05-14 23:35:51.769875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.898 [2024-05-14 23:35:52.028064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.184 [2024-05-14 23:35:52.228775] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.184 23:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:29.184 23:35:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:20:29.184 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:29.184 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:29.185 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:29.185 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:29.185 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:29.185 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:29.185 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:29.185 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:29.185 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:29.444 malloc1 00:20:29.444 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:29.703 [2024-05-14 23:35:52.857992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:29.703 [2024-05-14 23:35:52.858100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.703 [2024-05-14 23:35:52.858337] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:20:29.703 [2024-05-14 23:35:52.858399] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.703 [2024-05-14 23:35:52.860147] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.703 [2024-05-14 23:35:52.860196] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:29.703 pt1 00:20:29.703 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:29.703 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:29.703 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:29.703 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:29.703 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:29.703 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:20:29.703 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:29.703 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:29.703 23:35:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:29.962 malloc2 00:20:29.962 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:30.221 [2024-05-14 23:35:53.380147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:30.221 [2024-05-14 23:35:53.380297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.221 [2024-05-14 23:35:53.380347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:20:30.221 [2024-05-14 23:35:53.380387] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.221 [2024-05-14 23:35:53.382223] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.221 [2024-05-14 23:35:53.382277] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:30.221 pt2 00:20:30.221 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:30.221 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:30.221 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:30.221 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:30.221 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:30.221 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:30.221 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:30.221 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:30.221 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:30.480 malloc3 00:20:30.480 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:30.739 [2024-05-14 23:35:53.850269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:30.739 [2024-05-14 23:35:53.850359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.740 [2024-05-14 23:35:53.850410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:20:30.740 [2024-05-14 23:35:53.850455] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.740 [2024-05-14 23:35:53.852267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.740 [2024-05-14 23:35:53.852325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:30.740 pt3 00:20:30.740 23:35:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:30.740 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:30.740 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:20:30.740 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:20:30.740 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:30.740 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:30.740 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:30.740 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:30.740 23:35:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:30.999 malloc4 00:20:30.999 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:30.999 [2024-05-14 23:35:54.269135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:30.999 [2024-05-14 23:35:54.269507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.999 [2024-05-14 23:35:54.269570] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:20:30.999 [2024-05-14 23:35:54.269624] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.999 [2024-05-14 23:35:54.271391] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.999 [2024-05-14 23:35:54.271446] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:30.999 pt4 00:20:30.999 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:30.999 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:30.999 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:31.258 [2024-05-14 23:35:54.457248] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:31.258 [2024-05-14 23:35:54.459407] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:31.258 [2024-05-14 23:35:54.459557] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:31.258 [2024-05-14 23:35:54.459698] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:31.258 [2024-05-14 23:35:54.460001] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:20:31.258 [2024-05-14 23:35:54.460031] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:31.258 [2024-05-14 23:35:54.460321] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:31.258 [2024-05-14 23:35:54.460831] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:20:31.258 [2024-05-14 23:35:54.460865] bdev_raid.c:1726:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:20:31.258 [2024-05-14 23:35:54.461267] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.258 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.517 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:31.517 "name": "raid_bdev1", 00:20:31.517 "uuid": "09d437eb-08e6-4e8b-8b7f-351e4753303d", 00:20:31.517 "strip_size_kb": 64, 00:20:31.517 "state": "online", 00:20:31.517 "raid_level": "concat", 00:20:31.517 "superblock": true, 00:20:31.517 "num_base_bdevs": 4, 00:20:31.517 "num_base_bdevs_discovered": 4, 00:20:31.517 "num_base_bdevs_operational": 4, 00:20:31.517 "base_bdevs_list": [ 00:20:31.517 { 00:20:31.517 "name": "pt1", 00:20:31.517 "uuid": "44b294cb-e625-51f7-b057-809bc0e2aacb", 00:20:31.517 "is_configured": true, 00:20:31.517 "data_offset": 2048, 00:20:31.517 "data_size": 63488 00:20:31.517 }, 00:20:31.517 { 00:20:31.517 "name": "pt2", 00:20:31.517 "uuid": "0ccd48b0-db2c-598d-80ce-c801cabc654b", 00:20:31.517 "is_configured": true, 00:20:31.517 "data_offset": 2048, 00:20:31.517 "data_size": 63488 00:20:31.517 }, 00:20:31.517 { 00:20:31.517 "name": "pt3", 00:20:31.517 "uuid": "5c35c5aa-d5d9-5b10-beb0-4936c4a32833", 00:20:31.517 "is_configured": true, 00:20:31.517 "data_offset": 2048, 00:20:31.517 "data_size": 63488 00:20:31.517 }, 00:20:31.517 { 00:20:31.517 "name": "pt4", 00:20:31.517 "uuid": "164a9111-f9ad-58ab-a3b5-6fd7a04f6363", 00:20:31.517 "is_configured": true, 00:20:31.517 "data_offset": 2048, 00:20:31.517 "data_size": 63488 00:20:31.517 } 00:20:31.517 ] 00:20:31.517 }' 00:20:31.517 23:35:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:31.517 23:35:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.085 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:32.085 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:20:32.085 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 
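Condensed from the RPC calls above, the superblock test assembles its array in two stages: one malloc bdev per slot, each wrapped in a passthru bdev carrying a fixed UUID, followed by a single bdev_raid_create. The RPC shell variable below is only shorthand for the rpc.py invocation used throughout the log; -z 64 matches the strip_size_kb of 64 and -s the "superblock": true seen in the JSON dump that follows:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Backing malloc bdev plus passthru wrapper for slot 1 (repeated for pt2, pt3, pt4).
    $RPC bdev_malloc_create 32 512 -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # Four-way concat array, 64 KiB strip size, with an on-disk superblock.
    $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s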
00:20:32.085 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:32.085 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:32.085 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:20:32.085 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:32.085 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:32.345 [2024-05-14 23:35:55.525759] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:32.345 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:32.345 "name": "raid_bdev1", 00:20:32.345 "aliases": [ 00:20:32.345 "09d437eb-08e6-4e8b-8b7f-351e4753303d" 00:20:32.345 ], 00:20:32.345 "product_name": "Raid Volume", 00:20:32.345 "block_size": 512, 00:20:32.345 "num_blocks": 253952, 00:20:32.345 "uuid": "09d437eb-08e6-4e8b-8b7f-351e4753303d", 00:20:32.345 "assigned_rate_limits": { 00:20:32.345 "rw_ios_per_sec": 0, 00:20:32.345 "rw_mbytes_per_sec": 0, 00:20:32.345 "r_mbytes_per_sec": 0, 00:20:32.345 "w_mbytes_per_sec": 0 00:20:32.345 }, 00:20:32.345 "claimed": false, 00:20:32.345 "zoned": false, 00:20:32.345 "supported_io_types": { 00:20:32.345 "read": true, 00:20:32.345 "write": true, 00:20:32.345 "unmap": true, 00:20:32.345 "write_zeroes": true, 00:20:32.345 "flush": true, 00:20:32.345 "reset": true, 00:20:32.345 "compare": false, 00:20:32.345 "compare_and_write": false, 00:20:32.345 "abort": false, 00:20:32.345 "nvme_admin": false, 00:20:32.345 "nvme_io": false 00:20:32.345 }, 00:20:32.345 "memory_domains": [ 00:20:32.345 { 00:20:32.345 "dma_device_id": "system", 00:20:32.345 "dma_device_type": 1 00:20:32.345 }, 00:20:32.345 { 00:20:32.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.345 "dma_device_type": 2 00:20:32.345 }, 00:20:32.345 { 00:20:32.345 "dma_device_id": "system", 00:20:32.345 "dma_device_type": 1 00:20:32.345 }, 00:20:32.345 { 00:20:32.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.345 "dma_device_type": 2 00:20:32.345 }, 00:20:32.345 { 00:20:32.345 "dma_device_id": "system", 00:20:32.345 "dma_device_type": 1 00:20:32.345 }, 00:20:32.345 { 00:20:32.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.345 "dma_device_type": 2 00:20:32.345 }, 00:20:32.345 { 00:20:32.345 "dma_device_id": "system", 00:20:32.345 "dma_device_type": 1 00:20:32.345 }, 00:20:32.345 { 00:20:32.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.345 "dma_device_type": 2 00:20:32.345 } 00:20:32.345 ], 00:20:32.345 "driver_specific": { 00:20:32.345 "raid": { 00:20:32.345 "uuid": "09d437eb-08e6-4e8b-8b7f-351e4753303d", 00:20:32.345 "strip_size_kb": 64, 00:20:32.345 "state": "online", 00:20:32.345 "raid_level": "concat", 00:20:32.345 "superblock": true, 00:20:32.345 "num_base_bdevs": 4, 00:20:32.345 "num_base_bdevs_discovered": 4, 00:20:32.345 "num_base_bdevs_operational": 4, 00:20:32.345 "base_bdevs_list": [ 00:20:32.345 { 00:20:32.345 "name": "pt1", 00:20:32.345 "uuid": "44b294cb-e625-51f7-b057-809bc0e2aacb", 00:20:32.345 "is_configured": true, 00:20:32.345 "data_offset": 2048, 00:20:32.345 "data_size": 63488 00:20:32.345 }, 00:20:32.345 { 00:20:32.345 "name": "pt2", 00:20:32.345 "uuid": "0ccd48b0-db2c-598d-80ce-c801cabc654b", 00:20:32.345 "is_configured": true, 00:20:32.345 "data_offset": 2048, 00:20:32.345 "data_size": 63488 00:20:32.345 }, 
00:20:32.345 { 00:20:32.345 "name": "pt3", 00:20:32.345 "uuid": "5c35c5aa-d5d9-5b10-beb0-4936c4a32833", 00:20:32.345 "is_configured": true, 00:20:32.345 "data_offset": 2048, 00:20:32.345 "data_size": 63488 00:20:32.345 }, 00:20:32.345 { 00:20:32.345 "name": "pt4", 00:20:32.345 "uuid": "164a9111-f9ad-58ab-a3b5-6fd7a04f6363", 00:20:32.345 "is_configured": true, 00:20:32.345 "data_offset": 2048, 00:20:32.345 "data_size": 63488 00:20:32.345 } 00:20:32.345 ] 00:20:32.345 } 00:20:32.345 } 00:20:32.345 }' 00:20:32.345 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:32.345 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:20:32.345 pt2 00:20:32.345 pt3 00:20:32.345 pt4' 00:20:32.345 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:32.345 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:32.345 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:32.633 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:32.633 "name": "pt1", 00:20:32.633 "aliases": [ 00:20:32.633 "44b294cb-e625-51f7-b057-809bc0e2aacb" 00:20:32.633 ], 00:20:32.633 "product_name": "passthru", 00:20:32.633 "block_size": 512, 00:20:32.633 "num_blocks": 65536, 00:20:32.633 "uuid": "44b294cb-e625-51f7-b057-809bc0e2aacb", 00:20:32.633 "assigned_rate_limits": { 00:20:32.633 "rw_ios_per_sec": 0, 00:20:32.633 "rw_mbytes_per_sec": 0, 00:20:32.633 "r_mbytes_per_sec": 0, 00:20:32.633 "w_mbytes_per_sec": 0 00:20:32.633 }, 00:20:32.633 "claimed": true, 00:20:32.633 "claim_type": "exclusive_write", 00:20:32.633 "zoned": false, 00:20:32.633 "supported_io_types": { 00:20:32.633 "read": true, 00:20:32.633 "write": true, 00:20:32.633 "unmap": true, 00:20:32.633 "write_zeroes": true, 00:20:32.633 "flush": true, 00:20:32.633 "reset": true, 00:20:32.633 "compare": false, 00:20:32.633 "compare_and_write": false, 00:20:32.633 "abort": true, 00:20:32.633 "nvme_admin": false, 00:20:32.633 "nvme_io": false 00:20:32.633 }, 00:20:32.633 "memory_domains": [ 00:20:32.633 { 00:20:32.633 "dma_device_id": "system", 00:20:32.633 "dma_device_type": 1 00:20:32.633 }, 00:20:32.633 { 00:20:32.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.633 "dma_device_type": 2 00:20:32.633 } 00:20:32.633 ], 00:20:32.633 "driver_specific": { 00:20:32.633 "passthru": { 00:20:32.633 "name": "pt1", 00:20:32.633 "base_bdev_name": "malloc1" 00:20:32.633 } 00:20:32.633 } 00:20:32.633 }' 00:20:32.633 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:32.892 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:32.892 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:32.892 23:35:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:32.892 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:32.892 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.892 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:32.892 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:33.151 23:35:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:33.151 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:33.151 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:33.151 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:33.151 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:33.151 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:33.151 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:33.409 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:33.409 "name": "pt2", 00:20:33.409 "aliases": [ 00:20:33.409 "0ccd48b0-db2c-598d-80ce-c801cabc654b" 00:20:33.409 ], 00:20:33.409 "product_name": "passthru", 00:20:33.409 "block_size": 512, 00:20:33.409 "num_blocks": 65536, 00:20:33.409 "uuid": "0ccd48b0-db2c-598d-80ce-c801cabc654b", 00:20:33.409 "assigned_rate_limits": { 00:20:33.409 "rw_ios_per_sec": 0, 00:20:33.409 "rw_mbytes_per_sec": 0, 00:20:33.409 "r_mbytes_per_sec": 0, 00:20:33.409 "w_mbytes_per_sec": 0 00:20:33.409 }, 00:20:33.409 "claimed": true, 00:20:33.409 "claim_type": "exclusive_write", 00:20:33.409 "zoned": false, 00:20:33.409 "supported_io_types": { 00:20:33.410 "read": true, 00:20:33.410 "write": true, 00:20:33.410 "unmap": true, 00:20:33.410 "write_zeroes": true, 00:20:33.410 "flush": true, 00:20:33.410 "reset": true, 00:20:33.410 "compare": false, 00:20:33.410 "compare_and_write": false, 00:20:33.410 "abort": true, 00:20:33.410 "nvme_admin": false, 00:20:33.410 "nvme_io": false 00:20:33.410 }, 00:20:33.410 "memory_domains": [ 00:20:33.410 { 00:20:33.410 "dma_device_id": "system", 00:20:33.410 "dma_device_type": 1 00:20:33.410 }, 00:20:33.410 { 00:20:33.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.410 "dma_device_type": 2 00:20:33.410 } 00:20:33.410 ], 00:20:33.410 "driver_specific": { 00:20:33.410 "passthru": { 00:20:33.410 "name": "pt2", 00:20:33.410 "base_bdev_name": "malloc2" 00:20:33.410 } 00:20:33.410 } 00:20:33.410 }' 00:20:33.410 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:33.410 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:33.410 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:33.410 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:33.669 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:33.669 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:33.669 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:33.669 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:33.669 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:33.669 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:33.669 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:33.927 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:33.927 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- 
# for name in $base_bdev_names 00:20:33.927 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:33.927 23:35:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:34.185 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:34.185 "name": "pt3", 00:20:34.185 "aliases": [ 00:20:34.185 "5c35c5aa-d5d9-5b10-beb0-4936c4a32833" 00:20:34.185 ], 00:20:34.185 "product_name": "passthru", 00:20:34.185 "block_size": 512, 00:20:34.185 "num_blocks": 65536, 00:20:34.185 "uuid": "5c35c5aa-d5d9-5b10-beb0-4936c4a32833", 00:20:34.185 "assigned_rate_limits": { 00:20:34.185 "rw_ios_per_sec": 0, 00:20:34.185 "rw_mbytes_per_sec": 0, 00:20:34.185 "r_mbytes_per_sec": 0, 00:20:34.185 "w_mbytes_per_sec": 0 00:20:34.185 }, 00:20:34.186 "claimed": true, 00:20:34.186 "claim_type": "exclusive_write", 00:20:34.186 "zoned": false, 00:20:34.186 "supported_io_types": { 00:20:34.186 "read": true, 00:20:34.186 "write": true, 00:20:34.186 "unmap": true, 00:20:34.186 "write_zeroes": true, 00:20:34.186 "flush": true, 00:20:34.186 "reset": true, 00:20:34.186 "compare": false, 00:20:34.186 "compare_and_write": false, 00:20:34.186 "abort": true, 00:20:34.186 "nvme_admin": false, 00:20:34.186 "nvme_io": false 00:20:34.186 }, 00:20:34.186 "memory_domains": [ 00:20:34.186 { 00:20:34.186 "dma_device_id": "system", 00:20:34.186 "dma_device_type": 1 00:20:34.186 }, 00:20:34.186 { 00:20:34.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.186 "dma_device_type": 2 00:20:34.186 } 00:20:34.186 ], 00:20:34.186 "driver_specific": { 00:20:34.186 "passthru": { 00:20:34.186 "name": "pt3", 00:20:34.186 "base_bdev_name": "malloc3" 00:20:34.186 } 00:20:34.186 } 00:20:34.186 }' 00:20:34.186 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:34.186 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:34.186 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:34.186 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:34.186 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:34.445 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:34.445 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:34.445 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:34.445 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:34.445 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:34.445 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:34.704 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:34.704 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:34.704 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:34.704 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:34.704 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:34.704 "name": "pt4", 00:20:34.704 "aliases": [ 
00:20:34.704 "164a9111-f9ad-58ab-a3b5-6fd7a04f6363" 00:20:34.704 ], 00:20:34.704 "product_name": "passthru", 00:20:34.704 "block_size": 512, 00:20:34.704 "num_blocks": 65536, 00:20:34.704 "uuid": "164a9111-f9ad-58ab-a3b5-6fd7a04f6363", 00:20:34.704 "assigned_rate_limits": { 00:20:34.704 "rw_ios_per_sec": 0, 00:20:34.704 "rw_mbytes_per_sec": 0, 00:20:34.704 "r_mbytes_per_sec": 0, 00:20:34.704 "w_mbytes_per_sec": 0 00:20:34.704 }, 00:20:34.704 "claimed": true, 00:20:34.704 "claim_type": "exclusive_write", 00:20:34.704 "zoned": false, 00:20:34.704 "supported_io_types": { 00:20:34.704 "read": true, 00:20:34.704 "write": true, 00:20:34.704 "unmap": true, 00:20:34.704 "write_zeroes": true, 00:20:34.704 "flush": true, 00:20:34.704 "reset": true, 00:20:34.704 "compare": false, 00:20:34.704 "compare_and_write": false, 00:20:34.704 "abort": true, 00:20:34.704 "nvme_admin": false, 00:20:34.704 "nvme_io": false 00:20:34.704 }, 00:20:34.704 "memory_domains": [ 00:20:34.704 { 00:20:34.704 "dma_device_id": "system", 00:20:34.704 "dma_device_type": 1 00:20:34.704 }, 00:20:34.704 { 00:20:34.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.704 "dma_device_type": 2 00:20:34.704 } 00:20:34.704 ], 00:20:34.704 "driver_specific": { 00:20:34.704 "passthru": { 00:20:34.704 "name": "pt4", 00:20:34.704 "base_bdev_name": "malloc4" 00:20:34.704 } 00:20:34.704 } 00:20:34.704 }' 00:20:34.704 23:35:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:34.963 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:34.963 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:34.963 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:34.963 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:34.963 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:34.963 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:34.963 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:35.222 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:35.222 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:35.222 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:35.222 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:35.222 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:35.222 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:35.481 [2024-05-14 23:35:58.534253] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.481 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=09d437eb-08e6-4e8b-8b7f-351e4753303d 00:20:35.481 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 09d437eb-08e6-4e8b-8b7f-351e4753303d ']' 00:20:35.481 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:35.481 [2024-05-14 23:35:58.738118] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:35.481 
[2024-05-14 23:35:58.738190] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:35.481 [2024-05-14 23:35:58.738312] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:35.481 [2024-05-14 23:35:58.738412] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:35.481 [2024-05-14 23:35:58.738433] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:20:35.481 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.481 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:35.739 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:35.739 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:35.739 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:35.739 23:35:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:36.010 23:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.010 23:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:36.283 23:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.283 23:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:36.542 23:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.542 23:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:36.800 23:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:36.800 23:35:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- 
# type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:37.059 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:37.318 [2024-05-14 23:36:00.382362] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:37.318 [2024-05-14 23:36:00.383862] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:37.318 [2024-05-14 23:36:00.383936] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:37.318 [2024-05-14 23:36:00.383968] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:37.318 [2024-05-14 23:36:00.384004] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:37.318 [2024-05-14 23:36:00.384067] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:37.318 [2024-05-14 23:36:00.384101] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:37.318 [2024-05-14 23:36:00.384200] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:37.318 [2024-05-14 23:36:00.384244] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.318 [2024-05-14 23:36:00.384256] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:20:37.318 request: 00:20:37.318 { 00:20:37.318 "name": "raid_bdev1", 00:20:37.318 "raid_level": "concat", 00:20:37.318 "base_bdevs": [ 00:20:37.318 "malloc1", 00:20:37.318 "malloc2", 00:20:37.318 "malloc3", 00:20:37.318 "malloc4" 00:20:37.318 ], 00:20:37.318 "superblock": false, 00:20:37.318 "strip_size_kb": 64, 00:20:37.318 "method": "bdev_raid_create", 00:20:37.318 "req_id": 1 00:20:37.318 } 00:20:37.318 Got JSON-RPC error response 00:20:37.318 response: 00:20:37.318 { 00:20:37.318 "code": -17, 00:20:37.318 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:37.318 } 00:20:37.318 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:20:37.318 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.318 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.318 23:36:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.318 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.318 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:37.318 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:37.318 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:37.318 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:37.578 [2024-05-14 23:36:00.806389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:37.578 [2024-05-14 23:36:00.806518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.578 [2024-05-14 23:36:00.806582] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f780 00:20:37.578 [2024-05-14 23:36:00.806634] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.578 [2024-05-14 23:36:00.808548] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.578 [2024-05-14 23:36:00.808611] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:37.578 [2024-05-14 23:36:00.808714] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:37.578 [2024-05-14 23:36:00.808781] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:37.578 pt1 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.578 23:36:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.835 23:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:37.835 "name": "raid_bdev1", 00:20:37.835 "uuid": "09d437eb-08e6-4e8b-8b7f-351e4753303d", 00:20:37.835 "strip_size_kb": 64, 00:20:37.835 "state": "configuring", 00:20:37.835 "raid_level": "concat", 00:20:37.835 "superblock": true, 00:20:37.835 "num_base_bdevs": 4, 00:20:37.835 "num_base_bdevs_discovered": 1, 00:20:37.835 "num_base_bdevs_operational": 4, 00:20:37.835 "base_bdevs_list": [ 00:20:37.835 { 00:20:37.835 "name": "pt1", 00:20:37.835 "uuid": 
"44b294cb-e625-51f7-b057-809bc0e2aacb", 00:20:37.835 "is_configured": true, 00:20:37.835 "data_offset": 2048, 00:20:37.835 "data_size": 63488 00:20:37.835 }, 00:20:37.835 { 00:20:37.835 "name": null, 00:20:37.835 "uuid": "0ccd48b0-db2c-598d-80ce-c801cabc654b", 00:20:37.835 "is_configured": false, 00:20:37.835 "data_offset": 2048, 00:20:37.835 "data_size": 63488 00:20:37.835 }, 00:20:37.835 { 00:20:37.835 "name": null, 00:20:37.835 "uuid": "5c35c5aa-d5d9-5b10-beb0-4936c4a32833", 00:20:37.836 "is_configured": false, 00:20:37.836 "data_offset": 2048, 00:20:37.836 "data_size": 63488 00:20:37.836 }, 00:20:37.836 { 00:20:37.836 "name": null, 00:20:37.836 "uuid": "164a9111-f9ad-58ab-a3b5-6fd7a04f6363", 00:20:37.836 "is_configured": false, 00:20:37.836 "data_offset": 2048, 00:20:37.836 "data_size": 63488 00:20:37.836 } 00:20:37.836 ] 00:20:37.836 }' 00:20:37.836 23:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:37.836 23:36:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.400 23:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:20:38.400 23:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:38.658 [2024-05-14 23:36:01.870524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:38.658 [2024-05-14 23:36:01.870636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.658 [2024-05-14 23:36:01.870685] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031280 00:20:38.658 [2024-05-14 23:36:01.870708] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.658 [2024-05-14 23:36:01.871144] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.658 [2024-05-14 23:36:01.871212] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:38.658 [2024-05-14 23:36:01.871308] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:38.658 [2024-05-14 23:36:01.871334] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:38.658 pt2 00:20:38.658 23:36:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:38.916 [2024-05-14 23:36:02.110764] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.916 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.176 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:39.176 "name": "raid_bdev1", 00:20:39.176 "uuid": "09d437eb-08e6-4e8b-8b7f-351e4753303d", 00:20:39.176 "strip_size_kb": 64, 00:20:39.176 "state": "configuring", 00:20:39.176 "raid_level": "concat", 00:20:39.176 "superblock": true, 00:20:39.176 "num_base_bdevs": 4, 00:20:39.176 "num_base_bdevs_discovered": 1, 00:20:39.176 "num_base_bdevs_operational": 4, 00:20:39.176 "base_bdevs_list": [ 00:20:39.176 { 00:20:39.176 "name": "pt1", 00:20:39.176 "uuid": "44b294cb-e625-51f7-b057-809bc0e2aacb", 00:20:39.176 "is_configured": true, 00:20:39.176 "data_offset": 2048, 00:20:39.176 "data_size": 63488 00:20:39.176 }, 00:20:39.176 { 00:20:39.176 "name": null, 00:20:39.176 "uuid": "0ccd48b0-db2c-598d-80ce-c801cabc654b", 00:20:39.176 "is_configured": false, 00:20:39.176 "data_offset": 2048, 00:20:39.176 "data_size": 63488 00:20:39.176 }, 00:20:39.176 { 00:20:39.176 "name": null, 00:20:39.176 "uuid": "5c35c5aa-d5d9-5b10-beb0-4936c4a32833", 00:20:39.176 "is_configured": false, 00:20:39.176 "data_offset": 2048, 00:20:39.176 "data_size": 63488 00:20:39.176 }, 00:20:39.176 { 00:20:39.176 "name": null, 00:20:39.176 "uuid": "164a9111-f9ad-58ab-a3b5-6fd7a04f6363", 00:20:39.176 "is_configured": false, 00:20:39.176 "data_offset": 2048, 00:20:39.176 "data_size": 63488 00:20:39.176 } 00:20:39.176 ] 00:20:39.176 }' 00:20:39.176 23:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:39.176 23:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.125 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:40.125 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:40.125 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:40.125 [2024-05-14 23:36:03.282913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:40.125 [2024-05-14 23:36:03.283053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.125 [2024-05-14 23:36:03.283132] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032780 00:20:40.125 [2024-05-14 23:36:03.283195] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.125 [2024-05-14 23:36:03.283733] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.125 [2024-05-14 23:36:03.283816] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:40.125 [2024-05-14 23:36:03.283938] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:40.125 [2024-05-14 23:36:03.283971] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
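The loop traced above re-registers the remaining passthru bdevs by driving the same JSON-RPC calls shown in the xtrace; the lines below are a minimal standalone sketch of that step, assuming bdev_svc is already listening on /var/tmp/spdk-raid.sock and that raid_bdev1 was created with a superblock, so the examine path re-claims each pt bdev as it reappears (the "raid superblock found on bdev ptN" / "bdev ptN is claimed" messages). The bdev names and UUIDs mirror the trace; the final get call is only there to show the state flip.

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # re-create pt2..pt4 on top of their malloc bases; examine finds the raid
    # superblock on each one and re-claims it for raid_bdev1 automatically
    for i in 2 3 4; do
        $rpc_py bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    # once all four base bdevs are back, raid_bdev1 is reported as online
    $rpc_py bdev_raid_get_bdevs online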
00:20:40.125 pt2 00:20:40.125 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:40.125 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:40.125 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:40.384 [2024-05-14 23:36:03.546861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:40.384 [2024-05-14 23:36:03.546984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.384 [2024-05-14 23:36:03.547055] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033c80 00:20:40.384 [2024-05-14 23:36:03.547098] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.384 [2024-05-14 23:36:03.547491] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.384 [2024-05-14 23:36:03.547550] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:40.384 [2024-05-14 23:36:03.547657] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:40.384 [2024-05-14 23:36:03.547682] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:40.384 pt3 00:20:40.384 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:40.384 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:40.384 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:40.642 [2024-05-14 23:36:03.830890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:40.642 [2024-05-14 23:36:03.830986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.642 [2024-05-14 23:36:03.831029] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035180 00:20:40.642 [2024-05-14 23:36:03.831059] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.642 [2024-05-14 23:36:03.831437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.642 [2024-05-14 23:36:03.831491] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:40.642 [2024-05-14 23:36:03.831583] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:40.642 [2024-05-14 23:36:03.831609] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:40.642 [2024-05-14 23:36:03.831702] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:20:40.642 [2024-05-14 23:36:03.831715] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:40.642 [2024-05-14 23:36:03.831790] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:40.642 [2024-05-14 23:36:03.832009] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:20:40.642 [2024-05-14 23:36:03.832025] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:20:40.642 [2024-05-14 
23:36:03.832120] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.643 pt4 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.643 23:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.916 23:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.917 "name": "raid_bdev1", 00:20:40.917 "uuid": "09d437eb-08e6-4e8b-8b7f-351e4753303d", 00:20:40.917 "strip_size_kb": 64, 00:20:40.917 "state": "online", 00:20:40.917 "raid_level": "concat", 00:20:40.917 "superblock": true, 00:20:40.917 "num_base_bdevs": 4, 00:20:40.917 "num_base_bdevs_discovered": 4, 00:20:40.917 "num_base_bdevs_operational": 4, 00:20:40.917 "base_bdevs_list": [ 00:20:40.917 { 00:20:40.917 "name": "pt1", 00:20:40.917 "uuid": "44b294cb-e625-51f7-b057-809bc0e2aacb", 00:20:40.917 "is_configured": true, 00:20:40.917 "data_offset": 2048, 00:20:40.917 "data_size": 63488 00:20:40.917 }, 00:20:40.917 { 00:20:40.917 "name": "pt2", 00:20:40.917 "uuid": "0ccd48b0-db2c-598d-80ce-c801cabc654b", 00:20:40.917 "is_configured": true, 00:20:40.917 "data_offset": 2048, 00:20:40.917 "data_size": 63488 00:20:40.917 }, 00:20:40.917 { 00:20:40.917 "name": "pt3", 00:20:40.917 "uuid": "5c35c5aa-d5d9-5b10-beb0-4936c4a32833", 00:20:40.917 "is_configured": true, 00:20:40.917 "data_offset": 2048, 00:20:40.917 "data_size": 63488 00:20:40.917 }, 00:20:40.917 { 00:20:40.917 "name": "pt4", 00:20:40.917 "uuid": "164a9111-f9ad-58ab-a3b5-6fd7a04f6363", 00:20:40.917 "is_configured": true, 00:20:40.917 "data_offset": 2048, 00:20:40.917 "data_size": 63488 00:20:40.917 } 00:20:40.917 ] 00:20:40.917 }' 00:20:40.917 23:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.917 23:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.856 23:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:41.856 23:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_name=raid_bdev1 00:20:41.856 23:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:41.856 23:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:41.856 23:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:41.856 23:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:20:41.856 23:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:41.856 23:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:41.856 [2024-05-14 23:36:04.987280] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:41.856 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:41.856 "name": "raid_bdev1", 00:20:41.856 "aliases": [ 00:20:41.856 "09d437eb-08e6-4e8b-8b7f-351e4753303d" 00:20:41.856 ], 00:20:41.856 "product_name": "Raid Volume", 00:20:41.856 "block_size": 512, 00:20:41.856 "num_blocks": 253952, 00:20:41.856 "uuid": "09d437eb-08e6-4e8b-8b7f-351e4753303d", 00:20:41.856 "assigned_rate_limits": { 00:20:41.856 "rw_ios_per_sec": 0, 00:20:41.856 "rw_mbytes_per_sec": 0, 00:20:41.856 "r_mbytes_per_sec": 0, 00:20:41.856 "w_mbytes_per_sec": 0 00:20:41.856 }, 00:20:41.856 "claimed": false, 00:20:41.856 "zoned": false, 00:20:41.856 "supported_io_types": { 00:20:41.856 "read": true, 00:20:41.856 "write": true, 00:20:41.856 "unmap": true, 00:20:41.856 "write_zeroes": true, 00:20:41.856 "flush": true, 00:20:41.856 "reset": true, 00:20:41.856 "compare": false, 00:20:41.856 "compare_and_write": false, 00:20:41.856 "abort": false, 00:20:41.856 "nvme_admin": false, 00:20:41.856 "nvme_io": false 00:20:41.856 }, 00:20:41.856 "memory_domains": [ 00:20:41.856 { 00:20:41.856 "dma_device_id": "system", 00:20:41.856 "dma_device_type": 1 00:20:41.856 }, 00:20:41.856 { 00:20:41.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.856 "dma_device_type": 2 00:20:41.856 }, 00:20:41.856 { 00:20:41.856 "dma_device_id": "system", 00:20:41.856 "dma_device_type": 1 00:20:41.856 }, 00:20:41.856 { 00:20:41.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.856 "dma_device_type": 2 00:20:41.856 }, 00:20:41.856 { 00:20:41.856 "dma_device_id": "system", 00:20:41.856 "dma_device_type": 1 00:20:41.856 }, 00:20:41.856 { 00:20:41.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.856 "dma_device_type": 2 00:20:41.856 }, 00:20:41.856 { 00:20:41.856 "dma_device_id": "system", 00:20:41.856 "dma_device_type": 1 00:20:41.856 }, 00:20:41.856 { 00:20:41.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.856 "dma_device_type": 2 00:20:41.856 } 00:20:41.856 ], 00:20:41.856 "driver_specific": { 00:20:41.856 "raid": { 00:20:41.856 "uuid": "09d437eb-08e6-4e8b-8b7f-351e4753303d", 00:20:41.856 "strip_size_kb": 64, 00:20:41.856 "state": "online", 00:20:41.856 "raid_level": "concat", 00:20:41.856 "superblock": true, 00:20:41.856 "num_base_bdevs": 4, 00:20:41.856 "num_base_bdevs_discovered": 4, 00:20:41.856 "num_base_bdevs_operational": 4, 00:20:41.856 "base_bdevs_list": [ 00:20:41.856 { 00:20:41.856 "name": "pt1", 00:20:41.856 "uuid": "44b294cb-e625-51f7-b057-809bc0e2aacb", 00:20:41.856 "is_configured": true, 00:20:41.856 "data_offset": 2048, 00:20:41.856 "data_size": 63488 00:20:41.856 }, 00:20:41.856 { 00:20:41.856 "name": "pt2", 00:20:41.856 "uuid": 
"0ccd48b0-db2c-598d-80ce-c801cabc654b", 00:20:41.856 "is_configured": true, 00:20:41.856 "data_offset": 2048, 00:20:41.856 "data_size": 63488 00:20:41.856 }, 00:20:41.856 { 00:20:41.856 "name": "pt3", 00:20:41.856 "uuid": "5c35c5aa-d5d9-5b10-beb0-4936c4a32833", 00:20:41.856 "is_configured": true, 00:20:41.856 "data_offset": 2048, 00:20:41.856 "data_size": 63488 00:20:41.856 }, 00:20:41.856 { 00:20:41.856 "name": "pt4", 00:20:41.856 "uuid": "164a9111-f9ad-58ab-a3b5-6fd7a04f6363", 00:20:41.856 "is_configured": true, 00:20:41.856 "data_offset": 2048, 00:20:41.856 "data_size": 63488 00:20:41.856 } 00:20:41.856 ] 00:20:41.856 } 00:20:41.856 } 00:20:41.856 }' 00:20:41.856 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:41.856 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:20:41.856 pt2 00:20:41.856 pt3 00:20:41.856 pt4' 00:20:41.856 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:41.856 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:41.856 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:42.115 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:42.115 "name": "pt1", 00:20:42.115 "aliases": [ 00:20:42.115 "44b294cb-e625-51f7-b057-809bc0e2aacb" 00:20:42.115 ], 00:20:42.115 "product_name": "passthru", 00:20:42.115 "block_size": 512, 00:20:42.115 "num_blocks": 65536, 00:20:42.115 "uuid": "44b294cb-e625-51f7-b057-809bc0e2aacb", 00:20:42.115 "assigned_rate_limits": { 00:20:42.115 "rw_ios_per_sec": 0, 00:20:42.115 "rw_mbytes_per_sec": 0, 00:20:42.115 "r_mbytes_per_sec": 0, 00:20:42.115 "w_mbytes_per_sec": 0 00:20:42.115 }, 00:20:42.115 "claimed": true, 00:20:42.115 "claim_type": "exclusive_write", 00:20:42.115 "zoned": false, 00:20:42.115 "supported_io_types": { 00:20:42.115 "read": true, 00:20:42.115 "write": true, 00:20:42.115 "unmap": true, 00:20:42.115 "write_zeroes": true, 00:20:42.115 "flush": true, 00:20:42.115 "reset": true, 00:20:42.115 "compare": false, 00:20:42.115 "compare_and_write": false, 00:20:42.115 "abort": true, 00:20:42.115 "nvme_admin": false, 00:20:42.115 "nvme_io": false 00:20:42.115 }, 00:20:42.115 "memory_domains": [ 00:20:42.115 { 00:20:42.115 "dma_device_id": "system", 00:20:42.115 "dma_device_type": 1 00:20:42.115 }, 00:20:42.115 { 00:20:42.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.115 "dma_device_type": 2 00:20:42.115 } 00:20:42.115 ], 00:20:42.115 "driver_specific": { 00:20:42.115 "passthru": { 00:20:42.115 "name": "pt1", 00:20:42.115 "base_bdev_name": "malloc1" 00:20:42.115 } 00:20:42.115 } 00:20:42.115 }' 00:20:42.115 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:42.115 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:42.115 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:42.115 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:42.374 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:42.374 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:42.374 23:36:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:42.374 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:42.374 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:42.374 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:42.374 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:42.374 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:42.374 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:42.677 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:42.677 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:42.677 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:42.677 "name": "pt2", 00:20:42.677 "aliases": [ 00:20:42.677 "0ccd48b0-db2c-598d-80ce-c801cabc654b" 00:20:42.677 ], 00:20:42.677 "product_name": "passthru", 00:20:42.677 "block_size": 512, 00:20:42.677 "num_blocks": 65536, 00:20:42.677 "uuid": "0ccd48b0-db2c-598d-80ce-c801cabc654b", 00:20:42.677 "assigned_rate_limits": { 00:20:42.677 "rw_ios_per_sec": 0, 00:20:42.677 "rw_mbytes_per_sec": 0, 00:20:42.677 "r_mbytes_per_sec": 0, 00:20:42.677 "w_mbytes_per_sec": 0 00:20:42.677 }, 00:20:42.677 "claimed": true, 00:20:42.677 "claim_type": "exclusive_write", 00:20:42.677 "zoned": false, 00:20:42.677 "supported_io_types": { 00:20:42.677 "read": true, 00:20:42.677 "write": true, 00:20:42.677 "unmap": true, 00:20:42.677 "write_zeroes": true, 00:20:42.677 "flush": true, 00:20:42.677 "reset": true, 00:20:42.677 "compare": false, 00:20:42.677 "compare_and_write": false, 00:20:42.677 "abort": true, 00:20:42.677 "nvme_admin": false, 00:20:42.677 "nvme_io": false 00:20:42.677 }, 00:20:42.677 "memory_domains": [ 00:20:42.677 { 00:20:42.677 "dma_device_id": "system", 00:20:42.677 "dma_device_type": 1 00:20:42.677 }, 00:20:42.677 { 00:20:42.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.677 "dma_device_type": 2 00:20:42.677 } 00:20:42.677 ], 00:20:42.677 "driver_specific": { 00:20:42.677 "passthru": { 00:20:42.677 "name": "pt2", 00:20:42.677 "base_bdev_name": "malloc2" 00:20:42.677 } 00:20:42.677 } 00:20:42.677 }' 00:20:42.677 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:42.677 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:42.677 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:42.677 23:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:42.936 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:42.936 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:42.936 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:42.936 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:42.936 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:42.936 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:43.194 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:43.194 
23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:43.194 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:43.194 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:43.194 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:43.453 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:43.453 "name": "pt3", 00:20:43.453 "aliases": [ 00:20:43.453 "5c35c5aa-d5d9-5b10-beb0-4936c4a32833" 00:20:43.453 ], 00:20:43.453 "product_name": "passthru", 00:20:43.453 "block_size": 512, 00:20:43.453 "num_blocks": 65536, 00:20:43.453 "uuid": "5c35c5aa-d5d9-5b10-beb0-4936c4a32833", 00:20:43.453 "assigned_rate_limits": { 00:20:43.453 "rw_ios_per_sec": 0, 00:20:43.453 "rw_mbytes_per_sec": 0, 00:20:43.453 "r_mbytes_per_sec": 0, 00:20:43.453 "w_mbytes_per_sec": 0 00:20:43.453 }, 00:20:43.453 "claimed": true, 00:20:43.453 "claim_type": "exclusive_write", 00:20:43.453 "zoned": false, 00:20:43.453 "supported_io_types": { 00:20:43.453 "read": true, 00:20:43.453 "write": true, 00:20:43.453 "unmap": true, 00:20:43.453 "write_zeroes": true, 00:20:43.453 "flush": true, 00:20:43.453 "reset": true, 00:20:43.453 "compare": false, 00:20:43.453 "compare_and_write": false, 00:20:43.453 "abort": true, 00:20:43.453 "nvme_admin": false, 00:20:43.453 "nvme_io": false 00:20:43.453 }, 00:20:43.453 "memory_domains": [ 00:20:43.453 { 00:20:43.453 "dma_device_id": "system", 00:20:43.453 "dma_device_type": 1 00:20:43.453 }, 00:20:43.453 { 00:20:43.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.453 "dma_device_type": 2 00:20:43.453 } 00:20:43.453 ], 00:20:43.453 "driver_specific": { 00:20:43.453 "passthru": { 00:20:43.453 "name": "pt3", 00:20:43.453 "base_bdev_name": "malloc3" 00:20:43.453 } 00:20:43.453 } 00:20:43.453 }' 00:20:43.453 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:43.453 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:43.453 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:43.453 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:43.453 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:43.711 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:43.711 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:43.711 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:43.711 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:43.711 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:43.711 23:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:43.969 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:43.969 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:43.969 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:43.969 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # jq '.[]' 00:20:44.228 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:44.228 "name": "pt4", 00:20:44.228 "aliases": [ 00:20:44.228 "164a9111-f9ad-58ab-a3b5-6fd7a04f6363" 00:20:44.228 ], 00:20:44.228 "product_name": "passthru", 00:20:44.228 "block_size": 512, 00:20:44.228 "num_blocks": 65536, 00:20:44.228 "uuid": "164a9111-f9ad-58ab-a3b5-6fd7a04f6363", 00:20:44.228 "assigned_rate_limits": { 00:20:44.228 "rw_ios_per_sec": 0, 00:20:44.228 "rw_mbytes_per_sec": 0, 00:20:44.228 "r_mbytes_per_sec": 0, 00:20:44.228 "w_mbytes_per_sec": 0 00:20:44.228 }, 00:20:44.228 "claimed": true, 00:20:44.228 "claim_type": "exclusive_write", 00:20:44.228 "zoned": false, 00:20:44.228 "supported_io_types": { 00:20:44.228 "read": true, 00:20:44.228 "write": true, 00:20:44.228 "unmap": true, 00:20:44.228 "write_zeroes": true, 00:20:44.228 "flush": true, 00:20:44.228 "reset": true, 00:20:44.228 "compare": false, 00:20:44.228 "compare_and_write": false, 00:20:44.228 "abort": true, 00:20:44.228 "nvme_admin": false, 00:20:44.228 "nvme_io": false 00:20:44.228 }, 00:20:44.228 "memory_domains": [ 00:20:44.228 { 00:20:44.228 "dma_device_id": "system", 00:20:44.228 "dma_device_type": 1 00:20:44.228 }, 00:20:44.228 { 00:20:44.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.228 "dma_device_type": 2 00:20:44.228 } 00:20:44.228 ], 00:20:44.228 "driver_specific": { 00:20:44.228 "passthru": { 00:20:44.228 "name": "pt4", 00:20:44.228 "base_bdev_name": "malloc4" 00:20:44.228 } 00:20:44.228 } 00:20:44.228 }' 00:20:44.228 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:44.228 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:44.228 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:44.229 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:44.229 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:44.487 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:44.487 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:44.487 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:44.487 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:44.487 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:44.487 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:44.487 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:44.487 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:44.487 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:44.745 [2024-05-14 23:36:07.919779] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:44.745 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 09d437eb-08e6-4e8b-8b7f-351e4753303d '!=' 09d437eb-08e6-4e8b-8b7f-351e4753303d ']' 00:20:44.745 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:20:44.745 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:20:44.745 23:36:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:20:44.745 23:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 69227 00:20:44.745 23:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 69227 ']' 00:20:44.745 23:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 69227 00:20:44.745 23:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:20:44.745 23:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:44.745 23:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69227 00:20:44.745 killing process with pid 69227 00:20:44.745 23:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:44.746 23:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:44.746 23:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69227' 00:20:44.746 23:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 69227 00:20:44.746 23:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 69227 00:20:44.746 [2024-05-14 23:36:07.968309] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:44.746 [2024-05-14 23:36:07.968365] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:44.746 [2024-05-14 23:36:07.968444] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:44.746 [2024-05-14 23:36:07.968455] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:20:45.003 [2024-05-14 23:36:08.255663] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:46.411 23:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:20:46.411 00:20:46.411 real 0m17.932s 00:20:46.411 user 0m32.891s 00:20:46.411 sys 0m1.830s 00:20:46.411 23:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:46.411 23:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.411 ************************************ 00:20:46.411 END TEST raid_superblock_test 00:20:46.411 ************************************ 00:20:46.411 23:36:09 bdev_raid -- bdev/bdev_raid.sh@814 -- # for level in raid0 concat raid1 00:20:46.411 23:36:09 bdev_raid -- bdev/bdev_raid.sh@815 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:20:46.411 23:36:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:46.411 23:36:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:46.411 23:36:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:46.411 ************************************ 00:20:46.411 START TEST raid_state_function_test 00:20:46.411 ************************************ 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 false 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- 
# local superblock=false 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:20:46.411 Process raid pid: 69783 00:20:46.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
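The prologue above follows the usual SPDK test-harness pattern: launch the stub bdev_svc app on a private RPC socket, then block until that socket answers JSON-RPC before any bdev_raid RPCs are sent. A simplified stand-in for that startup step (the harness's waitforlisten helper additionally keeps checking that the pid is alive and gives up after a timeout), using the same binary, socket and log flag as the trace, might look like this:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll the socket until the app responds; rpc_get_methods is a cheap query
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done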
00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=69783 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 69783' 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 69783 /var/tmp/spdk-raid.sock 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 69783 ']' 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:46.411 23:36:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.411 [2024-05-14 23:36:09.579749] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:20:46.411 [2024-05-14 23:36:09.579958] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.670 [2024-05-14 23:36:09.739276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.670 [2024-05-14 23:36:09.949563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.929 [2024-05-14 23:36:10.147622] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:47.188 23:36:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:47.188 23:36:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:20:47.188 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:47.447 [2024-05-14 23:36:10.615310] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:47.447 [2024-05-14 23:36:10.615408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:47.447 [2024-05-14 23:36:10.615440] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:47.447 [2024-05-14 23:36:10.615460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:47.447 [2024-05-14 23:36:10.615468] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:47.447 [2024-05-14 23:36:10.615511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:47.447 [2024-05-14 23:36:10.615522] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:47.447 [2024-05-14 23:36:10.615549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:47.447 23:36:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:47.447 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:47.447 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:47.447 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:47.447 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:47.447 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:47.447 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:47.447 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:47.447 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:47.447 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:47.447 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.447 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.707 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:47.707 "name": "Existed_Raid", 00:20:47.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.707 "strip_size_kb": 0, 00:20:47.707 "state": "configuring", 00:20:47.707 "raid_level": "raid1", 00:20:47.707 "superblock": false, 00:20:47.707 "num_base_bdevs": 4, 00:20:47.707 "num_base_bdevs_discovered": 0, 00:20:47.707 "num_base_bdevs_operational": 4, 00:20:47.707 "base_bdevs_list": [ 00:20:47.707 { 00:20:47.707 "name": "BaseBdev1", 00:20:47.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.707 "is_configured": false, 00:20:47.707 "data_offset": 0, 00:20:47.707 "data_size": 0 00:20:47.707 }, 00:20:47.707 { 00:20:47.707 "name": "BaseBdev2", 00:20:47.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.707 "is_configured": false, 00:20:47.707 "data_offset": 0, 00:20:47.707 "data_size": 0 00:20:47.707 }, 00:20:47.707 { 00:20:47.707 "name": "BaseBdev3", 00:20:47.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.707 "is_configured": false, 00:20:47.707 "data_offset": 0, 00:20:47.707 "data_size": 0 00:20:47.707 }, 00:20:47.707 { 00:20:47.707 "name": "BaseBdev4", 00:20:47.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.707 "is_configured": false, 00:20:47.707 "data_offset": 0, 00:20:47.707 "data_size": 0 00:20:47.707 } 00:20:47.707 ] 00:20:47.707 }' 00:20:47.707 23:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:47.707 23:36:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.276 23:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:48.535 [2024-05-14 23:36:11.631397] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:48.535 [2024-05-14 23:36:11.631441] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:20:48.535 
23:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:48.535 [2024-05-14 23:36:11.811476] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:48.535 [2024-05-14 23:36:11.811592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:48.535 [2024-05-14 23:36:11.811632] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:48.535 [2024-05-14 23:36:11.811675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:48.535 [2024-05-14 23:36:11.811693] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:48.535 [2024-05-14 23:36:11.811720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:48.535 [2024-05-14 23:36:11.811734] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:48.535 [2024-05-14 23:36:11.811771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:48.794 23:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:48.794 [2024-05-14 23:36:12.079000] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:48.794 BaseBdev1 00:20:49.053 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:20:49.053 23:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:49.053 23:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:49.053 23:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:49.053 23:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:49.053 23:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:49.053 23:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:49.053 23:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:49.311 [ 00:20:49.311 { 00:20:49.311 "name": "BaseBdev1", 00:20:49.311 "aliases": [ 00:20:49.311 "fe9355f7-9772-4843-8509-ff5a1010c171" 00:20:49.311 ], 00:20:49.311 "product_name": "Malloc disk", 00:20:49.311 "block_size": 512, 00:20:49.311 "num_blocks": 65536, 00:20:49.311 "uuid": "fe9355f7-9772-4843-8509-ff5a1010c171", 00:20:49.311 "assigned_rate_limits": { 00:20:49.311 "rw_ios_per_sec": 0, 00:20:49.311 "rw_mbytes_per_sec": 0, 00:20:49.311 "r_mbytes_per_sec": 0, 00:20:49.311 "w_mbytes_per_sec": 0 00:20:49.311 }, 00:20:49.311 "claimed": true, 00:20:49.311 "claim_type": "exclusive_write", 00:20:49.311 "zoned": false, 00:20:49.311 "supported_io_types": { 00:20:49.311 "read": true, 00:20:49.311 "write": true, 00:20:49.311 "unmap": true, 00:20:49.311 "write_zeroes": true, 00:20:49.311 "flush": true, 00:20:49.311 "reset": true, 00:20:49.311 "compare": 
false, 00:20:49.311 "compare_and_write": false, 00:20:49.311 "abort": true, 00:20:49.311 "nvme_admin": false, 00:20:49.311 "nvme_io": false 00:20:49.311 }, 00:20:49.311 "memory_domains": [ 00:20:49.311 { 00:20:49.311 "dma_device_id": "system", 00:20:49.311 "dma_device_type": 1 00:20:49.311 }, 00:20:49.311 { 00:20:49.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:49.311 "dma_device_type": 2 00:20:49.311 } 00:20:49.311 ], 00:20:49.311 "driver_specific": {} 00:20:49.311 } 00:20:49.311 ] 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.311 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.569 23:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:49.569 "name": "Existed_Raid", 00:20:49.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.569 "strip_size_kb": 0, 00:20:49.569 "state": "configuring", 00:20:49.569 "raid_level": "raid1", 00:20:49.569 "superblock": false, 00:20:49.569 "num_base_bdevs": 4, 00:20:49.569 "num_base_bdevs_discovered": 1, 00:20:49.569 "num_base_bdevs_operational": 4, 00:20:49.569 "base_bdevs_list": [ 00:20:49.569 { 00:20:49.569 "name": "BaseBdev1", 00:20:49.569 "uuid": "fe9355f7-9772-4843-8509-ff5a1010c171", 00:20:49.569 "is_configured": true, 00:20:49.569 "data_offset": 0, 00:20:49.569 "data_size": 65536 00:20:49.569 }, 00:20:49.569 { 00:20:49.569 "name": "BaseBdev2", 00:20:49.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.569 "is_configured": false, 00:20:49.569 "data_offset": 0, 00:20:49.569 "data_size": 0 00:20:49.569 }, 00:20:49.569 { 00:20:49.569 "name": "BaseBdev3", 00:20:49.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.569 "is_configured": false, 00:20:49.569 "data_offset": 0, 00:20:49.569 "data_size": 0 00:20:49.569 }, 00:20:49.569 { 00:20:49.569 "name": "BaseBdev4", 00:20:49.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.569 "is_configured": false, 00:20:49.569 "data_offset": 0, 00:20:49.569 "data_size": 0 00:20:49.569 } 00:20:49.569 ] 00:20:49.569 }' 00:20:49.569 23:36:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:49.569 23:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.135 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:50.391 [2024-05-14 23:36:13.511266] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:50.391 [2024-05-14 23:36:13.511321] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:20:50.391 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:50.648 [2024-05-14 23:36:13.731341] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:50.648 [2024-05-14 23:36:13.732874] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:50.648 [2024-05-14 23:36:13.732950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:50.648 [2024-05-14 23:36:13.732975] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:50.648 [2024-05-14 23:36:13.733004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:50.648 [2024-05-14 23:36:13.733015] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:50.648 [2024-05-14 23:36:13.733032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.648 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.907 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:20:50.907 "name": "Existed_Raid", 00:20:50.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.907 "strip_size_kb": 0, 00:20:50.907 "state": "configuring", 00:20:50.907 "raid_level": "raid1", 00:20:50.907 "superblock": false, 00:20:50.907 "num_base_bdevs": 4, 00:20:50.907 "num_base_bdevs_discovered": 1, 00:20:50.907 "num_base_bdevs_operational": 4, 00:20:50.907 "base_bdevs_list": [ 00:20:50.907 { 00:20:50.907 "name": "BaseBdev1", 00:20:50.907 "uuid": "fe9355f7-9772-4843-8509-ff5a1010c171", 00:20:50.907 "is_configured": true, 00:20:50.907 "data_offset": 0, 00:20:50.907 "data_size": 65536 00:20:50.907 }, 00:20:50.907 { 00:20:50.907 "name": "BaseBdev2", 00:20:50.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.907 "is_configured": false, 00:20:50.907 "data_offset": 0, 00:20:50.907 "data_size": 0 00:20:50.907 }, 00:20:50.907 { 00:20:50.907 "name": "BaseBdev3", 00:20:50.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.907 "is_configured": false, 00:20:50.907 "data_offset": 0, 00:20:50.907 "data_size": 0 00:20:50.907 }, 00:20:50.907 { 00:20:50.907 "name": "BaseBdev4", 00:20:50.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.907 "is_configured": false, 00:20:50.907 "data_offset": 0, 00:20:50.907 "data_size": 0 00:20:50.907 } 00:20:50.907 ] 00:20:50.907 }' 00:20:50.907 23:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.907 23:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.474 23:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:51.733 [2024-05-14 23:36:14.918171] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:51.733 BaseBdev2 00:20:51.733 23:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:20:51.733 23:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:51.733 23:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:51.733 23:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:51.733 23:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:51.733 23:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:51.733 23:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:51.992 23:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:52.250 [ 00:20:52.250 { 00:20:52.250 "name": "BaseBdev2", 00:20:52.250 "aliases": [ 00:20:52.250 "06116ea4-c699-435c-b1d0-984609c10128" 00:20:52.250 ], 00:20:52.250 "product_name": "Malloc disk", 00:20:52.250 "block_size": 512, 00:20:52.250 "num_blocks": 65536, 00:20:52.250 "uuid": "06116ea4-c699-435c-b1d0-984609c10128", 00:20:52.250 "assigned_rate_limits": { 00:20:52.250 "rw_ios_per_sec": 0, 00:20:52.250 "rw_mbytes_per_sec": 0, 00:20:52.250 "r_mbytes_per_sec": 0, 00:20:52.250 "w_mbytes_per_sec": 0 00:20:52.250 }, 00:20:52.250 "claimed": true, 00:20:52.250 "claim_type": "exclusive_write", 00:20:52.250 "zoned": false, 
00:20:52.250 "supported_io_types": { 00:20:52.250 "read": true, 00:20:52.250 "write": true, 00:20:52.250 "unmap": true, 00:20:52.250 "write_zeroes": true, 00:20:52.250 "flush": true, 00:20:52.250 "reset": true, 00:20:52.250 "compare": false, 00:20:52.250 "compare_and_write": false, 00:20:52.250 "abort": true, 00:20:52.250 "nvme_admin": false, 00:20:52.250 "nvme_io": false 00:20:52.251 }, 00:20:52.251 "memory_domains": [ 00:20:52.251 { 00:20:52.251 "dma_device_id": "system", 00:20:52.251 "dma_device_type": 1 00:20:52.251 }, 00:20:52.251 { 00:20:52.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.251 "dma_device_type": 2 00:20:52.251 } 00:20:52.251 ], 00:20:52.251 "driver_specific": {} 00:20:52.251 } 00:20:52.251 ] 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.251 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.509 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:52.509 "name": "Existed_Raid", 00:20:52.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.509 "strip_size_kb": 0, 00:20:52.509 "state": "configuring", 00:20:52.509 "raid_level": "raid1", 00:20:52.509 "superblock": false, 00:20:52.509 "num_base_bdevs": 4, 00:20:52.509 "num_base_bdevs_discovered": 2, 00:20:52.509 "num_base_bdevs_operational": 4, 00:20:52.509 "base_bdevs_list": [ 00:20:52.509 { 00:20:52.509 "name": "BaseBdev1", 00:20:52.509 "uuid": "fe9355f7-9772-4843-8509-ff5a1010c171", 00:20:52.509 "is_configured": true, 00:20:52.509 "data_offset": 0, 00:20:52.509 "data_size": 65536 00:20:52.509 }, 00:20:52.509 { 00:20:52.509 "name": "BaseBdev2", 00:20:52.509 "uuid": "06116ea4-c699-435c-b1d0-984609c10128", 00:20:52.509 "is_configured": true, 00:20:52.509 "data_offset": 0, 00:20:52.509 "data_size": 65536 00:20:52.509 }, 00:20:52.509 { 00:20:52.509 "name": "BaseBdev3", 00:20:52.509 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:52.509 "is_configured": false, 00:20:52.509 "data_offset": 0, 00:20:52.509 "data_size": 0 00:20:52.509 }, 00:20:52.509 { 00:20:52.509 "name": "BaseBdev4", 00:20:52.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.509 "is_configured": false, 00:20:52.509 "data_offset": 0, 00:20:52.509 "data_size": 0 00:20:52.509 } 00:20:52.509 ] 00:20:52.509 }' 00:20:52.509 23:36:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:52.509 23:36:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.076 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:53.334 BaseBdev3 00:20:53.334 [2024-05-14 23:36:16.503360] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:53.334 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:20:53.334 23:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:53.334 23:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:53.334 23:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:53.334 23:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:53.334 23:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:53.334 23:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:53.592 23:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:53.851 [ 00:20:53.851 { 00:20:53.851 "name": "BaseBdev3", 00:20:53.851 "aliases": [ 00:20:53.851 "4e0ab35a-c075-4807-8d2e-631a4e8bf835" 00:20:53.851 ], 00:20:53.851 "product_name": "Malloc disk", 00:20:53.851 "block_size": 512, 00:20:53.851 "num_blocks": 65536, 00:20:53.851 "uuid": "4e0ab35a-c075-4807-8d2e-631a4e8bf835", 00:20:53.851 "assigned_rate_limits": { 00:20:53.851 "rw_ios_per_sec": 0, 00:20:53.851 "rw_mbytes_per_sec": 0, 00:20:53.851 "r_mbytes_per_sec": 0, 00:20:53.851 "w_mbytes_per_sec": 0 00:20:53.851 }, 00:20:53.851 "claimed": true, 00:20:53.851 "claim_type": "exclusive_write", 00:20:53.851 "zoned": false, 00:20:53.851 "supported_io_types": { 00:20:53.851 "read": true, 00:20:53.851 "write": true, 00:20:53.851 "unmap": true, 00:20:53.851 "write_zeroes": true, 00:20:53.851 "flush": true, 00:20:53.851 "reset": true, 00:20:53.851 "compare": false, 00:20:53.851 "compare_and_write": false, 00:20:53.851 "abort": true, 00:20:53.851 "nvme_admin": false, 00:20:53.851 "nvme_io": false 00:20:53.851 }, 00:20:53.851 "memory_domains": [ 00:20:53.851 { 00:20:53.851 "dma_device_id": "system", 00:20:53.851 "dma_device_type": 1 00:20:53.851 }, 00:20:53.851 { 00:20:53.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.851 "dma_device_type": 2 00:20:53.851 } 00:20:53.851 ], 00:20:53.851 "driver_specific": {} 00:20:53.851 } 00:20:53.851 ] 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.851 23:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.109 23:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:54.109 "name": "Existed_Raid", 00:20:54.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.109 "strip_size_kb": 0, 00:20:54.109 "state": "configuring", 00:20:54.109 "raid_level": "raid1", 00:20:54.109 "superblock": false, 00:20:54.109 "num_base_bdevs": 4, 00:20:54.109 "num_base_bdevs_discovered": 3, 00:20:54.109 "num_base_bdevs_operational": 4, 00:20:54.109 "base_bdevs_list": [ 00:20:54.109 { 00:20:54.109 "name": "BaseBdev1", 00:20:54.109 "uuid": "fe9355f7-9772-4843-8509-ff5a1010c171", 00:20:54.109 "is_configured": true, 00:20:54.109 "data_offset": 0, 00:20:54.109 "data_size": 65536 00:20:54.109 }, 00:20:54.109 { 00:20:54.109 "name": "BaseBdev2", 00:20:54.109 "uuid": "06116ea4-c699-435c-b1d0-984609c10128", 00:20:54.109 "is_configured": true, 00:20:54.109 "data_offset": 0, 00:20:54.109 "data_size": 65536 00:20:54.109 }, 00:20:54.109 { 00:20:54.109 "name": "BaseBdev3", 00:20:54.109 "uuid": "4e0ab35a-c075-4807-8d2e-631a4e8bf835", 00:20:54.109 "is_configured": true, 00:20:54.109 "data_offset": 0, 00:20:54.109 "data_size": 65536 00:20:54.109 }, 00:20:54.109 { 00:20:54.109 "name": "BaseBdev4", 00:20:54.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.109 "is_configured": false, 00:20:54.109 "data_offset": 0, 00:20:54.109 "data_size": 0 00:20:54.109 } 00:20:54.109 ] 00:20:54.109 }' 00:20:54.109 23:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:54.109 23:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.676 23:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:54.974 [2024-05-14 23:36:18.125705] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev4 is claimed 00:20:54.974 [2024-05-14 23:36:18.125757] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:20:54.974 [2024-05-14 23:36:18.125767] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:54.974 [2024-05-14 23:36:18.125876] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:20:54.974 [2024-05-14 23:36:18.126103] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:20:54.974 [2024-05-14 23:36:18.126117] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:20:54.974 BaseBdev4 00:20:54.974 [2024-05-14 23:36:18.126579] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.974 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:20:54.974 23:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:20:54.974 23:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:54.974 23:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:54.974 23:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:54.974 23:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:54.974 23:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:55.236 [ 00:20:55.236 { 00:20:55.236 "name": "BaseBdev4", 00:20:55.236 "aliases": [ 00:20:55.236 "67dd64b8-f33a-480b-8460-dcf125131bb4" 00:20:55.236 ], 00:20:55.236 "product_name": "Malloc disk", 00:20:55.236 "block_size": 512, 00:20:55.236 "num_blocks": 65536, 00:20:55.236 "uuid": "67dd64b8-f33a-480b-8460-dcf125131bb4", 00:20:55.236 "assigned_rate_limits": { 00:20:55.236 "rw_ios_per_sec": 0, 00:20:55.236 "rw_mbytes_per_sec": 0, 00:20:55.236 "r_mbytes_per_sec": 0, 00:20:55.236 "w_mbytes_per_sec": 0 00:20:55.236 }, 00:20:55.236 "claimed": true, 00:20:55.236 "claim_type": "exclusive_write", 00:20:55.236 "zoned": false, 00:20:55.236 "supported_io_types": { 00:20:55.236 "read": true, 00:20:55.236 "write": true, 00:20:55.236 "unmap": true, 00:20:55.236 "write_zeroes": true, 00:20:55.236 "flush": true, 00:20:55.236 "reset": true, 00:20:55.236 "compare": false, 00:20:55.236 "compare_and_write": false, 00:20:55.236 "abort": true, 00:20:55.236 "nvme_admin": false, 00:20:55.236 "nvme_io": false 00:20:55.236 }, 00:20:55.236 "memory_domains": [ 00:20:55.236 { 00:20:55.236 "dma_device_id": "system", 00:20:55.236 "dma_device_type": 1 00:20:55.236 }, 00:20:55.236 { 00:20:55.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.236 "dma_device_type": 2 00:20:55.236 } 00:20:55.236 ], 00:20:55.236 "driver_specific": {} 00:20:55.236 } 00:20:55.236 ] 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 
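At this point all four malloc base bdevs (bdev_malloc_create 32 512 -b BaseBdevN) have been created and claimed and the array transitions to online, so the test switches verify_raid_bdev_state to expect "online" with 4 of 4 base bdevs discovered. As the trace shows, the check is a bdev_raid_get_bdevs all RPC filtered through jq; the condensed sketch below uses the field names from the JSON dumps above, though the exact set of comparisons in the real helper may differ:

# rpc() helper assumed for brevity; socket and script path are the ones used throughout the trace.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Fetch every raid bdev and keep only the one under test.
raid_bdev_info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

# Compare the fields the test cares about against the expected values.
[[ $(jq -r '.state'                      <<<"$raid_bdev_info") == online ]]
[[ $(jq -r '.raid_level'                 <<<"$raid_bdev_info") == raid1  ]]
[[ $(jq -r '.num_base_bdevs_discovered'  <<<"$raid_bdev_info") -eq 4     ]]
[[ $(jq -r '.num_base_bdevs_operational' <<<"$raid_bdev_info") -eq 4     ]]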
00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.236 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.495 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.495 "name": "Existed_Raid", 00:20:55.495 "uuid": "38e2caa9-74b0-4899-ba4b-b51889f9427e", 00:20:55.495 "strip_size_kb": 0, 00:20:55.495 "state": "online", 00:20:55.495 "raid_level": "raid1", 00:20:55.495 "superblock": false, 00:20:55.495 "num_base_bdevs": 4, 00:20:55.495 "num_base_bdevs_discovered": 4, 00:20:55.495 "num_base_bdevs_operational": 4, 00:20:55.495 "base_bdevs_list": [ 00:20:55.495 { 00:20:55.495 "name": "BaseBdev1", 00:20:55.495 "uuid": "fe9355f7-9772-4843-8509-ff5a1010c171", 00:20:55.495 "is_configured": true, 00:20:55.495 "data_offset": 0, 00:20:55.495 "data_size": 65536 00:20:55.495 }, 00:20:55.495 { 00:20:55.495 "name": "BaseBdev2", 00:20:55.495 "uuid": "06116ea4-c699-435c-b1d0-984609c10128", 00:20:55.495 "is_configured": true, 00:20:55.495 "data_offset": 0, 00:20:55.495 "data_size": 65536 00:20:55.495 }, 00:20:55.495 { 00:20:55.495 "name": "BaseBdev3", 00:20:55.495 "uuid": "4e0ab35a-c075-4807-8d2e-631a4e8bf835", 00:20:55.495 "is_configured": true, 00:20:55.495 "data_offset": 0, 00:20:55.495 "data_size": 65536 00:20:55.495 }, 00:20:55.495 { 00:20:55.495 "name": "BaseBdev4", 00:20:55.495 "uuid": "67dd64b8-f33a-480b-8460-dcf125131bb4", 00:20:55.495 "is_configured": true, 00:20:55.495 "data_offset": 0, 00:20:55.495 "data_size": 65536 00:20:55.495 } 00:20:55.495 ] 00:20:55.495 }' 00:20:55.495 23:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.495 23:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.064 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:20:56.064 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:56.064 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:56.064 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_info 00:20:56.064 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:56.064 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:20:56.064 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:56.064 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:56.323 [2024-05-14 23:36:19.542174] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.323 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:56.323 "name": "Existed_Raid", 00:20:56.323 "aliases": [ 00:20:56.323 "38e2caa9-74b0-4899-ba4b-b51889f9427e" 00:20:56.323 ], 00:20:56.323 "product_name": "Raid Volume", 00:20:56.323 "block_size": 512, 00:20:56.323 "num_blocks": 65536, 00:20:56.323 "uuid": "38e2caa9-74b0-4899-ba4b-b51889f9427e", 00:20:56.323 "assigned_rate_limits": { 00:20:56.323 "rw_ios_per_sec": 0, 00:20:56.323 "rw_mbytes_per_sec": 0, 00:20:56.323 "r_mbytes_per_sec": 0, 00:20:56.323 "w_mbytes_per_sec": 0 00:20:56.323 }, 00:20:56.323 "claimed": false, 00:20:56.323 "zoned": false, 00:20:56.323 "supported_io_types": { 00:20:56.323 "read": true, 00:20:56.323 "write": true, 00:20:56.323 "unmap": false, 00:20:56.323 "write_zeroes": true, 00:20:56.323 "flush": false, 00:20:56.323 "reset": true, 00:20:56.324 "compare": false, 00:20:56.324 "compare_and_write": false, 00:20:56.324 "abort": false, 00:20:56.324 "nvme_admin": false, 00:20:56.324 "nvme_io": false 00:20:56.324 }, 00:20:56.324 "memory_domains": [ 00:20:56.324 { 00:20:56.324 "dma_device_id": "system", 00:20:56.324 "dma_device_type": 1 00:20:56.324 }, 00:20:56.324 { 00:20:56.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.324 "dma_device_type": 2 00:20:56.324 }, 00:20:56.324 { 00:20:56.324 "dma_device_id": "system", 00:20:56.324 "dma_device_type": 1 00:20:56.324 }, 00:20:56.324 { 00:20:56.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.324 "dma_device_type": 2 00:20:56.324 }, 00:20:56.324 { 00:20:56.324 "dma_device_id": "system", 00:20:56.324 "dma_device_type": 1 00:20:56.324 }, 00:20:56.324 { 00:20:56.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.324 "dma_device_type": 2 00:20:56.324 }, 00:20:56.324 { 00:20:56.324 "dma_device_id": "system", 00:20:56.324 "dma_device_type": 1 00:20:56.324 }, 00:20:56.324 { 00:20:56.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.324 "dma_device_type": 2 00:20:56.324 } 00:20:56.324 ], 00:20:56.324 "driver_specific": { 00:20:56.324 "raid": { 00:20:56.324 "uuid": "38e2caa9-74b0-4899-ba4b-b51889f9427e", 00:20:56.324 "strip_size_kb": 0, 00:20:56.324 "state": "online", 00:20:56.324 "raid_level": "raid1", 00:20:56.324 "superblock": false, 00:20:56.324 "num_base_bdevs": 4, 00:20:56.324 "num_base_bdevs_discovered": 4, 00:20:56.324 "num_base_bdevs_operational": 4, 00:20:56.324 "base_bdevs_list": [ 00:20:56.324 { 00:20:56.324 "name": "BaseBdev1", 00:20:56.324 "uuid": "fe9355f7-9772-4843-8509-ff5a1010c171", 00:20:56.324 "is_configured": true, 00:20:56.324 "data_offset": 0, 00:20:56.324 "data_size": 65536 00:20:56.324 }, 00:20:56.324 { 00:20:56.324 "name": "BaseBdev2", 00:20:56.324 "uuid": "06116ea4-c699-435c-b1d0-984609c10128", 00:20:56.324 "is_configured": true, 00:20:56.324 "data_offset": 0, 00:20:56.324 "data_size": 65536 00:20:56.324 }, 00:20:56.324 { 00:20:56.324 "name": "BaseBdev3", 
00:20:56.324 "uuid": "4e0ab35a-c075-4807-8d2e-631a4e8bf835", 00:20:56.324 "is_configured": true, 00:20:56.324 "data_offset": 0, 00:20:56.324 "data_size": 65536 00:20:56.324 }, 00:20:56.324 { 00:20:56.324 "name": "BaseBdev4", 00:20:56.324 "uuid": "67dd64b8-f33a-480b-8460-dcf125131bb4", 00:20:56.324 "is_configured": true, 00:20:56.324 "data_offset": 0, 00:20:56.324 "data_size": 65536 00:20:56.324 } 00:20:56.324 ] 00:20:56.324 } 00:20:56.324 } 00:20:56.324 }' 00:20:56.324 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:56.583 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:20:56.583 BaseBdev2 00:20:56.583 BaseBdev3 00:20:56.583 BaseBdev4' 00:20:56.583 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:56.583 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:56.583 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:56.842 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:56.842 "name": "BaseBdev1", 00:20:56.842 "aliases": [ 00:20:56.842 "fe9355f7-9772-4843-8509-ff5a1010c171" 00:20:56.842 ], 00:20:56.842 "product_name": "Malloc disk", 00:20:56.842 "block_size": 512, 00:20:56.842 "num_blocks": 65536, 00:20:56.842 "uuid": "fe9355f7-9772-4843-8509-ff5a1010c171", 00:20:56.842 "assigned_rate_limits": { 00:20:56.842 "rw_ios_per_sec": 0, 00:20:56.842 "rw_mbytes_per_sec": 0, 00:20:56.842 "r_mbytes_per_sec": 0, 00:20:56.842 "w_mbytes_per_sec": 0 00:20:56.842 }, 00:20:56.842 "claimed": true, 00:20:56.842 "claim_type": "exclusive_write", 00:20:56.842 "zoned": false, 00:20:56.842 "supported_io_types": { 00:20:56.842 "read": true, 00:20:56.842 "write": true, 00:20:56.842 "unmap": true, 00:20:56.842 "write_zeroes": true, 00:20:56.842 "flush": true, 00:20:56.842 "reset": true, 00:20:56.842 "compare": false, 00:20:56.842 "compare_and_write": false, 00:20:56.842 "abort": true, 00:20:56.842 "nvme_admin": false, 00:20:56.842 "nvme_io": false 00:20:56.842 }, 00:20:56.842 "memory_domains": [ 00:20:56.842 { 00:20:56.842 "dma_device_id": "system", 00:20:56.842 "dma_device_type": 1 00:20:56.842 }, 00:20:56.842 { 00:20:56.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.842 "dma_device_type": 2 00:20:56.842 } 00:20:56.842 ], 00:20:56.842 "driver_specific": {} 00:20:56.842 }' 00:20:56.842 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:56.842 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:56.842 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:56.842 23:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:56.842 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:56.842 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.842 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:57.102 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:57.102 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null 
== null ]] 00:20:57.102 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:57.102 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:57.102 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:57.102 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:57.102 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:57.102 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:57.361 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:57.361 "name": "BaseBdev2", 00:20:57.361 "aliases": [ 00:20:57.361 "06116ea4-c699-435c-b1d0-984609c10128" 00:20:57.361 ], 00:20:57.361 "product_name": "Malloc disk", 00:20:57.361 "block_size": 512, 00:20:57.361 "num_blocks": 65536, 00:20:57.361 "uuid": "06116ea4-c699-435c-b1d0-984609c10128", 00:20:57.361 "assigned_rate_limits": { 00:20:57.361 "rw_ios_per_sec": 0, 00:20:57.361 "rw_mbytes_per_sec": 0, 00:20:57.361 "r_mbytes_per_sec": 0, 00:20:57.361 "w_mbytes_per_sec": 0 00:20:57.361 }, 00:20:57.361 "claimed": true, 00:20:57.361 "claim_type": "exclusive_write", 00:20:57.361 "zoned": false, 00:20:57.361 "supported_io_types": { 00:20:57.361 "read": true, 00:20:57.361 "write": true, 00:20:57.361 "unmap": true, 00:20:57.361 "write_zeroes": true, 00:20:57.361 "flush": true, 00:20:57.361 "reset": true, 00:20:57.361 "compare": false, 00:20:57.362 "compare_and_write": false, 00:20:57.362 "abort": true, 00:20:57.362 "nvme_admin": false, 00:20:57.362 "nvme_io": false 00:20:57.362 }, 00:20:57.362 "memory_domains": [ 00:20:57.362 { 00:20:57.362 "dma_device_id": "system", 00:20:57.362 "dma_device_type": 1 00:20:57.362 }, 00:20:57.362 { 00:20:57.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.362 "dma_device_type": 2 00:20:57.362 } 00:20:57.362 ], 00:20:57.362 "driver_specific": {} 00:20:57.362 }' 00:20:57.362 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:57.362 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:57.362 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:57.362 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:57.362 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:57.622 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.622 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:57.622 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:57.622 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.622 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:57.622 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:57.622 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:57.622 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:57.622 23:36:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:57.622 23:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:57.882 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:57.882 "name": "BaseBdev3", 00:20:57.882 "aliases": [ 00:20:57.882 "4e0ab35a-c075-4807-8d2e-631a4e8bf835" 00:20:57.882 ], 00:20:57.882 "product_name": "Malloc disk", 00:20:57.882 "block_size": 512, 00:20:57.882 "num_blocks": 65536, 00:20:57.882 "uuid": "4e0ab35a-c075-4807-8d2e-631a4e8bf835", 00:20:57.882 "assigned_rate_limits": { 00:20:57.882 "rw_ios_per_sec": 0, 00:20:57.882 "rw_mbytes_per_sec": 0, 00:20:57.882 "r_mbytes_per_sec": 0, 00:20:57.882 "w_mbytes_per_sec": 0 00:20:57.882 }, 00:20:57.882 "claimed": true, 00:20:57.882 "claim_type": "exclusive_write", 00:20:57.882 "zoned": false, 00:20:57.882 "supported_io_types": { 00:20:57.882 "read": true, 00:20:57.882 "write": true, 00:20:57.882 "unmap": true, 00:20:57.882 "write_zeroes": true, 00:20:57.882 "flush": true, 00:20:57.882 "reset": true, 00:20:57.882 "compare": false, 00:20:57.882 "compare_and_write": false, 00:20:57.882 "abort": true, 00:20:57.882 "nvme_admin": false, 00:20:57.882 "nvme_io": false 00:20:57.882 }, 00:20:57.882 "memory_domains": [ 00:20:57.882 { 00:20:57.882 "dma_device_id": "system", 00:20:57.882 "dma_device_type": 1 00:20:57.882 }, 00:20:57.882 { 00:20:57.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.882 "dma_device_type": 2 00:20:57.882 } 00:20:57.882 ], 00:20:57.882 "driver_specific": {} 00:20:57.882 }' 00:20:57.882 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:57.882 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:57.882 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:57.882 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:58.206 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:58.206 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.206 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:58.206 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:58.206 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.206 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:58.206 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:58.465 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:58.465 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:58.465 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:58.465 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:58.465 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:58.465 "name": "BaseBdev4", 00:20:58.465 "aliases": [ 00:20:58.465 "67dd64b8-f33a-480b-8460-dcf125131bb4" 00:20:58.465 ], 00:20:58.465 "product_name": "Malloc disk", 
00:20:58.465 "block_size": 512, 00:20:58.465 "num_blocks": 65536, 00:20:58.465 "uuid": "67dd64b8-f33a-480b-8460-dcf125131bb4", 00:20:58.465 "assigned_rate_limits": { 00:20:58.465 "rw_ios_per_sec": 0, 00:20:58.465 "rw_mbytes_per_sec": 0, 00:20:58.465 "r_mbytes_per_sec": 0, 00:20:58.465 "w_mbytes_per_sec": 0 00:20:58.465 }, 00:20:58.465 "claimed": true, 00:20:58.465 "claim_type": "exclusive_write", 00:20:58.465 "zoned": false, 00:20:58.465 "supported_io_types": { 00:20:58.465 "read": true, 00:20:58.465 "write": true, 00:20:58.465 "unmap": true, 00:20:58.465 "write_zeroes": true, 00:20:58.465 "flush": true, 00:20:58.465 "reset": true, 00:20:58.465 "compare": false, 00:20:58.465 "compare_and_write": false, 00:20:58.465 "abort": true, 00:20:58.465 "nvme_admin": false, 00:20:58.465 "nvme_io": false 00:20:58.465 }, 00:20:58.465 "memory_domains": [ 00:20:58.465 { 00:20:58.465 "dma_device_id": "system", 00:20:58.465 "dma_device_type": 1 00:20:58.465 }, 00:20:58.465 { 00:20:58.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.465 "dma_device_type": 2 00:20:58.465 } 00:20:58.465 ], 00:20:58.465 "driver_specific": {} 00:20:58.465 }' 00:20:58.465 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:58.724 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:58.725 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:58.725 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:58.725 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:58.725 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.725 23:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:58.983 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:58.983 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.983 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:58.983 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:58.983 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:58.983 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:59.243 [2024-05-14 23:36:22.414445] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.243 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.503 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.503 "name": "Existed_Raid", 00:20:59.503 "uuid": "38e2caa9-74b0-4899-ba4b-b51889f9427e", 00:20:59.503 "strip_size_kb": 0, 00:20:59.503 "state": "online", 00:20:59.503 "raid_level": "raid1", 00:20:59.503 "superblock": false, 00:20:59.503 "num_base_bdevs": 4, 00:20:59.503 "num_base_bdevs_discovered": 3, 00:20:59.503 "num_base_bdevs_operational": 3, 00:20:59.503 "base_bdevs_list": [ 00:20:59.503 { 00:20:59.503 "name": null, 00:20:59.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.503 "is_configured": false, 00:20:59.503 "data_offset": 0, 00:20:59.504 "data_size": 65536 00:20:59.504 }, 00:20:59.504 { 00:20:59.504 "name": "BaseBdev2", 00:20:59.504 "uuid": "06116ea4-c699-435c-b1d0-984609c10128", 00:20:59.504 "is_configured": true, 00:20:59.504 "data_offset": 0, 00:20:59.504 "data_size": 65536 00:20:59.504 }, 00:20:59.504 { 00:20:59.504 "name": "BaseBdev3", 00:20:59.504 "uuid": "4e0ab35a-c075-4807-8d2e-631a4e8bf835", 00:20:59.504 "is_configured": true, 00:20:59.504 "data_offset": 0, 00:20:59.504 "data_size": 65536 00:20:59.504 }, 00:20:59.504 { 00:20:59.504 "name": "BaseBdev4", 00:20:59.504 "uuid": "67dd64b8-f33a-480b-8460-dcf125131bb4", 00:20:59.504 "is_configured": true, 00:20:59.504 "data_offset": 0, 00:20:59.504 "data_size": 65536 00:20:59.504 } 00:20:59.504 ] 00:20:59.504 }' 00:20:59.504 23:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.504 23:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.442 23:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:00.442 23:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:00.442 23:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.442 23:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:00.442 23:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:00.442 23:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:00.442 23:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:00.701 [2024-05-14 23:36:23.935077] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:00.960 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:00.960 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:00.960 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.960 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:00.960 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:00.960 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:00.960 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:01.219 [2024-05-14 23:36:24.438576] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:01.479 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:01.479 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:01.479 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:01.479 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.479 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:01.479 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:01.479 23:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:01.820 [2024-05-14 23:36:24.966313] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:01.820 [2024-05-14 23:36:24.966392] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:01.820 [2024-05-14 23:36:25.051618] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.820 [2024-05-14 23:36:25.051718] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.820 [2024-05-14 23:36:25.051739] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:21:01.820 23:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:01.820 23:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:01.820 23:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.820 23:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:21:02.081 23:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:21:02.081 23:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:21:02.081 
23:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:21:02.081 23:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:21:02.081 23:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:02.081 23:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:02.340 BaseBdev2 00:21:02.340 23:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:21:02.340 23:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:02.340 23:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:02.340 23:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:02.340 23:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:02.340 23:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:02.340 23:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:02.599 23:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:02.858 [ 00:21:02.858 { 00:21:02.858 "name": "BaseBdev2", 00:21:02.858 "aliases": [ 00:21:02.858 "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e" 00:21:02.858 ], 00:21:02.858 "product_name": "Malloc disk", 00:21:02.858 "block_size": 512, 00:21:02.858 "num_blocks": 65536, 00:21:02.858 "uuid": "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e", 00:21:02.859 "assigned_rate_limits": { 00:21:02.859 "rw_ios_per_sec": 0, 00:21:02.859 "rw_mbytes_per_sec": 0, 00:21:02.859 "r_mbytes_per_sec": 0, 00:21:02.859 "w_mbytes_per_sec": 0 00:21:02.859 }, 00:21:02.859 "claimed": false, 00:21:02.859 "zoned": false, 00:21:02.859 "supported_io_types": { 00:21:02.859 "read": true, 00:21:02.859 "write": true, 00:21:02.859 "unmap": true, 00:21:02.859 "write_zeroes": true, 00:21:02.859 "flush": true, 00:21:02.859 "reset": true, 00:21:02.859 "compare": false, 00:21:02.859 "compare_and_write": false, 00:21:02.859 "abort": true, 00:21:02.859 "nvme_admin": false, 00:21:02.859 "nvme_io": false 00:21:02.859 }, 00:21:02.859 "memory_domains": [ 00:21:02.859 { 00:21:02.859 "dma_device_id": "system", 00:21:02.859 "dma_device_type": 1 00:21:02.859 }, 00:21:02.859 { 00:21:02.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.859 "dma_device_type": 2 00:21:02.859 } 00:21:02.859 ], 00:21:02.859 "driver_specific": {} 00:21:02.859 } 00:21:02.859 ] 00:21:02.859 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:02.859 23:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:21:02.859 23:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:02.859 23:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:03.118 BaseBdev3 00:21:03.118 23:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:21:03.118 
23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:03.118 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:03.118 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:03.118 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:03.118 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:03.118 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:03.378 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:03.637 [ 00:21:03.637 { 00:21:03.637 "name": "BaseBdev3", 00:21:03.637 "aliases": [ 00:21:03.637 "a4989f34-4ecd-4a65-a9cb-afcc40b69727" 00:21:03.637 ], 00:21:03.637 "product_name": "Malloc disk", 00:21:03.637 "block_size": 512, 00:21:03.638 "num_blocks": 65536, 00:21:03.638 "uuid": "a4989f34-4ecd-4a65-a9cb-afcc40b69727", 00:21:03.638 "assigned_rate_limits": { 00:21:03.638 "rw_ios_per_sec": 0, 00:21:03.638 "rw_mbytes_per_sec": 0, 00:21:03.638 "r_mbytes_per_sec": 0, 00:21:03.638 "w_mbytes_per_sec": 0 00:21:03.638 }, 00:21:03.638 "claimed": false, 00:21:03.638 "zoned": false, 00:21:03.638 "supported_io_types": { 00:21:03.638 "read": true, 00:21:03.638 "write": true, 00:21:03.638 "unmap": true, 00:21:03.638 "write_zeroes": true, 00:21:03.638 "flush": true, 00:21:03.638 "reset": true, 00:21:03.638 "compare": false, 00:21:03.638 "compare_and_write": false, 00:21:03.638 "abort": true, 00:21:03.638 "nvme_admin": false, 00:21:03.638 "nvme_io": false 00:21:03.638 }, 00:21:03.638 "memory_domains": [ 00:21:03.638 { 00:21:03.638 "dma_device_id": "system", 00:21:03.638 "dma_device_type": 1 00:21:03.638 }, 00:21:03.638 { 00:21:03.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.638 "dma_device_type": 2 00:21:03.638 } 00:21:03.638 ], 00:21:03.638 "driver_specific": {} 00:21:03.638 } 00:21:03.638 ] 00:21:03.638 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:03.638 23:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:21:03.638 23:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:03.638 23:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:03.897 BaseBdev4 00:21:03.897 23:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:21:03.897 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:21:03.897 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:03.897 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:03.897 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:03.897 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:03.897 23:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:03.897 23:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:04.157 [ 00:21:04.157 { 00:21:04.157 "name": "BaseBdev4", 00:21:04.157 "aliases": [ 00:21:04.157 "97e44532-f69f-42dc-aa76-34441b613f1d" 00:21:04.157 ], 00:21:04.157 "product_name": "Malloc disk", 00:21:04.157 "block_size": 512, 00:21:04.157 "num_blocks": 65536, 00:21:04.157 "uuid": "97e44532-f69f-42dc-aa76-34441b613f1d", 00:21:04.157 "assigned_rate_limits": { 00:21:04.157 "rw_ios_per_sec": 0, 00:21:04.157 "rw_mbytes_per_sec": 0, 00:21:04.157 "r_mbytes_per_sec": 0, 00:21:04.157 "w_mbytes_per_sec": 0 00:21:04.157 }, 00:21:04.157 "claimed": false, 00:21:04.157 "zoned": false, 00:21:04.157 "supported_io_types": { 00:21:04.157 "read": true, 00:21:04.157 "write": true, 00:21:04.157 "unmap": true, 00:21:04.157 "write_zeroes": true, 00:21:04.157 "flush": true, 00:21:04.157 "reset": true, 00:21:04.157 "compare": false, 00:21:04.157 "compare_and_write": false, 00:21:04.157 "abort": true, 00:21:04.157 "nvme_admin": false, 00:21:04.157 "nvme_io": false 00:21:04.157 }, 00:21:04.157 "memory_domains": [ 00:21:04.157 { 00:21:04.157 "dma_device_id": "system", 00:21:04.157 "dma_device_type": 1 00:21:04.157 }, 00:21:04.157 { 00:21:04.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.157 "dma_device_type": 2 00:21:04.157 } 00:21:04.157 ], 00:21:04.157 "driver_specific": {} 00:21:04.157 } 00:21:04.157 ] 00:21:04.157 23:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:04.157 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:21:04.157 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:04.157 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:04.418 [2024-05-14 23:36:27.577490] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:04.418 [2024-05-14 23:36:27.577591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:04.418 [2024-05-14 23:36:27.577633] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.418 [2024-05-14 23:36:27.579144] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:04.418 [2024-05-14 23:36:27.579204] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:04.418 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:04.418 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:04.418 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:04.418 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:04.418 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:04.418 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:04.418 23:36:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:04.418 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:04.418 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:04.418 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:04.418 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.418 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.678 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:04.678 "name": "Existed_Raid", 00:21:04.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.678 "strip_size_kb": 0, 00:21:04.678 "state": "configuring", 00:21:04.678 "raid_level": "raid1", 00:21:04.678 "superblock": false, 00:21:04.678 "num_base_bdevs": 4, 00:21:04.678 "num_base_bdevs_discovered": 3, 00:21:04.678 "num_base_bdevs_operational": 4, 00:21:04.678 "base_bdevs_list": [ 00:21:04.678 { 00:21:04.678 "name": "BaseBdev1", 00:21:04.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.678 "is_configured": false, 00:21:04.678 "data_offset": 0, 00:21:04.678 "data_size": 0 00:21:04.678 }, 00:21:04.678 { 00:21:04.678 "name": "BaseBdev2", 00:21:04.678 "uuid": "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e", 00:21:04.678 "is_configured": true, 00:21:04.678 "data_offset": 0, 00:21:04.678 "data_size": 65536 00:21:04.678 }, 00:21:04.678 { 00:21:04.678 "name": "BaseBdev3", 00:21:04.678 "uuid": "a4989f34-4ecd-4a65-a9cb-afcc40b69727", 00:21:04.678 "is_configured": true, 00:21:04.678 "data_offset": 0, 00:21:04.678 "data_size": 65536 00:21:04.678 }, 00:21:04.678 { 00:21:04.678 "name": "BaseBdev4", 00:21:04.678 "uuid": "97e44532-f69f-42dc-aa76-34441b613f1d", 00:21:04.678 "is_configured": true, 00:21:04.678 "data_offset": 0, 00:21:04.678 "data_size": 65536 00:21:04.678 } 00:21:04.678 ] 00:21:04.678 }' 00:21:04.678 23:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:04.678 23:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.245 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:05.504 [2024-05-14 23:36:28.634513] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.504 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.764 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:05.764 "name": "Existed_Raid", 00:21:05.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.764 "strip_size_kb": 0, 00:21:05.764 "state": "configuring", 00:21:05.764 "raid_level": "raid1", 00:21:05.764 "superblock": false, 00:21:05.764 "num_base_bdevs": 4, 00:21:05.764 "num_base_bdevs_discovered": 2, 00:21:05.764 "num_base_bdevs_operational": 4, 00:21:05.764 "base_bdevs_list": [ 00:21:05.764 { 00:21:05.764 "name": "BaseBdev1", 00:21:05.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.764 "is_configured": false, 00:21:05.764 "data_offset": 0, 00:21:05.764 "data_size": 0 00:21:05.764 }, 00:21:05.764 { 00:21:05.764 "name": null, 00:21:05.764 "uuid": "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e", 00:21:05.764 "is_configured": false, 00:21:05.764 "data_offset": 0, 00:21:05.764 "data_size": 65536 00:21:05.764 }, 00:21:05.764 { 00:21:05.764 "name": "BaseBdev3", 00:21:05.764 "uuid": "a4989f34-4ecd-4a65-a9cb-afcc40b69727", 00:21:05.764 "is_configured": true, 00:21:05.764 "data_offset": 0, 00:21:05.764 "data_size": 65536 00:21:05.764 }, 00:21:05.764 { 00:21:05.764 "name": "BaseBdev4", 00:21:05.764 "uuid": "97e44532-f69f-42dc-aa76-34441b613f1d", 00:21:05.764 "is_configured": true, 00:21:05.764 "data_offset": 0, 00:21:05.764 "data_size": 65536 00:21:05.764 } 00:21:05.764 ] 00:21:05.764 }' 00:21:05.764 23:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:05.764 23:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.332 23:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.332 23:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:06.591 23:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:21:06.591 23:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:06.850 [2024-05-14 23:36:29.981842] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.850 BaseBdev1 00:21:06.851 23:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:21:06.851 23:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:06.851 23:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:06.851 23:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:06.851 23:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:06.851 23:36:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:06.851 23:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:07.109 23:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:07.368 [ 00:21:07.368 { 00:21:07.368 "name": "BaseBdev1", 00:21:07.368 "aliases": [ 00:21:07.368 "294fce80-3b65-4de0-8fb0-a6b557755e21" 00:21:07.368 ], 00:21:07.368 "product_name": "Malloc disk", 00:21:07.368 "block_size": 512, 00:21:07.368 "num_blocks": 65536, 00:21:07.368 "uuid": "294fce80-3b65-4de0-8fb0-a6b557755e21", 00:21:07.368 "assigned_rate_limits": { 00:21:07.368 "rw_ios_per_sec": 0, 00:21:07.368 "rw_mbytes_per_sec": 0, 00:21:07.368 "r_mbytes_per_sec": 0, 00:21:07.368 "w_mbytes_per_sec": 0 00:21:07.368 }, 00:21:07.368 "claimed": true, 00:21:07.368 "claim_type": "exclusive_write", 00:21:07.368 "zoned": false, 00:21:07.368 "supported_io_types": { 00:21:07.368 "read": true, 00:21:07.368 "write": true, 00:21:07.368 "unmap": true, 00:21:07.368 "write_zeroes": true, 00:21:07.368 "flush": true, 00:21:07.368 "reset": true, 00:21:07.368 "compare": false, 00:21:07.368 "compare_and_write": false, 00:21:07.368 "abort": true, 00:21:07.368 "nvme_admin": false, 00:21:07.368 "nvme_io": false 00:21:07.368 }, 00:21:07.368 "memory_domains": [ 00:21:07.368 { 00:21:07.368 "dma_device_id": "system", 00:21:07.368 "dma_device_type": 1 00:21:07.368 }, 00:21:07.368 { 00:21:07.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.368 "dma_device_type": 2 00:21:07.368 } 00:21:07.368 ], 00:21:07.368 "driver_specific": {} 00:21:07.368 } 00:21:07.368 ] 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.368 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:07.368 "name": 
"Existed_Raid", 00:21:07.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.368 "strip_size_kb": 0, 00:21:07.368 "state": "configuring", 00:21:07.368 "raid_level": "raid1", 00:21:07.368 "superblock": false, 00:21:07.368 "num_base_bdevs": 4, 00:21:07.368 "num_base_bdevs_discovered": 3, 00:21:07.368 "num_base_bdevs_operational": 4, 00:21:07.368 "base_bdevs_list": [ 00:21:07.368 { 00:21:07.368 "name": "BaseBdev1", 00:21:07.368 "uuid": "294fce80-3b65-4de0-8fb0-a6b557755e21", 00:21:07.368 "is_configured": true, 00:21:07.368 "data_offset": 0, 00:21:07.368 "data_size": 65536 00:21:07.368 }, 00:21:07.368 { 00:21:07.368 "name": null, 00:21:07.368 "uuid": "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e", 00:21:07.368 "is_configured": false, 00:21:07.368 "data_offset": 0, 00:21:07.368 "data_size": 65536 00:21:07.368 }, 00:21:07.368 { 00:21:07.368 "name": "BaseBdev3", 00:21:07.368 "uuid": "a4989f34-4ecd-4a65-a9cb-afcc40b69727", 00:21:07.368 "is_configured": true, 00:21:07.368 "data_offset": 0, 00:21:07.368 "data_size": 65536 00:21:07.368 }, 00:21:07.368 { 00:21:07.368 "name": "BaseBdev4", 00:21:07.368 "uuid": "97e44532-f69f-42dc-aa76-34441b613f1d", 00:21:07.368 "is_configured": true, 00:21:07.369 "data_offset": 0, 00:21:07.369 "data_size": 65536 00:21:07.369 } 00:21:07.369 ] 00:21:07.369 }' 00:21:07.369 23:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:07.369 23:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.304 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.304 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:08.304 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:08.304 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:08.619 [2024-05-14 23:36:31.698153] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:08.619 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.878 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:08.878 "name": "Existed_Raid", 00:21:08.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.878 "strip_size_kb": 0, 00:21:08.878 "state": "configuring", 00:21:08.878 "raid_level": "raid1", 00:21:08.878 "superblock": false, 00:21:08.878 "num_base_bdevs": 4, 00:21:08.878 "num_base_bdevs_discovered": 2, 00:21:08.878 "num_base_bdevs_operational": 4, 00:21:08.878 "base_bdevs_list": [ 00:21:08.878 { 00:21:08.878 "name": "BaseBdev1", 00:21:08.878 "uuid": "294fce80-3b65-4de0-8fb0-a6b557755e21", 00:21:08.878 "is_configured": true, 00:21:08.878 "data_offset": 0, 00:21:08.878 "data_size": 65536 00:21:08.878 }, 00:21:08.878 { 00:21:08.878 "name": null, 00:21:08.878 "uuid": "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e", 00:21:08.878 "is_configured": false, 00:21:08.878 "data_offset": 0, 00:21:08.878 "data_size": 65536 00:21:08.878 }, 00:21:08.878 { 00:21:08.878 "name": null, 00:21:08.878 "uuid": "a4989f34-4ecd-4a65-a9cb-afcc40b69727", 00:21:08.878 "is_configured": false, 00:21:08.878 "data_offset": 0, 00:21:08.878 "data_size": 65536 00:21:08.878 }, 00:21:08.878 { 00:21:08.878 "name": "BaseBdev4", 00:21:08.878 "uuid": "97e44532-f69f-42dc-aa76-34441b613f1d", 00:21:08.878 "is_configured": true, 00:21:08.878 "data_offset": 0, 00:21:08.878 "data_size": 65536 00:21:08.878 } 00:21:08.878 ] 00:21:08.878 }' 00:21:08.878 23:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:08.878 23:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.444 23:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.444 23:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:09.703 23:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:21:09.703 23:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:09.962 [2024-05-14 23:36:33.010458] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:09.962 "name": "Existed_Raid", 00:21:09.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.962 "strip_size_kb": 0, 00:21:09.962 "state": "configuring", 00:21:09.962 "raid_level": "raid1", 00:21:09.962 "superblock": false, 00:21:09.962 "num_base_bdevs": 4, 00:21:09.962 "num_base_bdevs_discovered": 3, 00:21:09.962 "num_base_bdevs_operational": 4, 00:21:09.962 "base_bdevs_list": [ 00:21:09.962 { 00:21:09.962 "name": "BaseBdev1", 00:21:09.962 "uuid": "294fce80-3b65-4de0-8fb0-a6b557755e21", 00:21:09.962 "is_configured": true, 00:21:09.962 "data_offset": 0, 00:21:09.962 "data_size": 65536 00:21:09.962 }, 00:21:09.962 { 00:21:09.962 "name": null, 00:21:09.962 "uuid": "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e", 00:21:09.962 "is_configured": false, 00:21:09.962 "data_offset": 0, 00:21:09.962 "data_size": 65536 00:21:09.962 }, 00:21:09.962 { 00:21:09.962 "name": "BaseBdev3", 00:21:09.962 "uuid": "a4989f34-4ecd-4a65-a9cb-afcc40b69727", 00:21:09.962 "is_configured": true, 00:21:09.962 "data_offset": 0, 00:21:09.962 "data_size": 65536 00:21:09.962 }, 00:21:09.962 { 00:21:09.962 "name": "BaseBdev4", 00:21:09.962 "uuid": "97e44532-f69f-42dc-aa76-34441b613f1d", 00:21:09.962 "is_configured": true, 00:21:09.962 "data_offset": 0, 00:21:09.962 "data_size": 65536 00:21:09.962 } 00:21:09.962 ] 00:21:09.962 }' 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:09.962 23:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.897 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.897 23:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:10.897 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:21:10.897 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:11.156 [2024-05-14 23:36:34.346650] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:11.156 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:11.156 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:11.156 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:11.156 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:11.156 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:11.156 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:11.156 
23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:11.156 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:11.156 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:11.156 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:11.415 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.415 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.415 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.415 "name": "Existed_Raid", 00:21:11.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.415 "strip_size_kb": 0, 00:21:11.415 "state": "configuring", 00:21:11.415 "raid_level": "raid1", 00:21:11.415 "superblock": false, 00:21:11.415 "num_base_bdevs": 4, 00:21:11.415 "num_base_bdevs_discovered": 2, 00:21:11.415 "num_base_bdevs_operational": 4, 00:21:11.415 "base_bdevs_list": [ 00:21:11.415 { 00:21:11.415 "name": null, 00:21:11.415 "uuid": "294fce80-3b65-4de0-8fb0-a6b557755e21", 00:21:11.415 "is_configured": false, 00:21:11.415 "data_offset": 0, 00:21:11.415 "data_size": 65536 00:21:11.415 }, 00:21:11.415 { 00:21:11.415 "name": null, 00:21:11.415 "uuid": "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e", 00:21:11.415 "is_configured": false, 00:21:11.415 "data_offset": 0, 00:21:11.415 "data_size": 65536 00:21:11.415 }, 00:21:11.415 { 00:21:11.415 "name": "BaseBdev3", 00:21:11.415 "uuid": "a4989f34-4ecd-4a65-a9cb-afcc40b69727", 00:21:11.415 "is_configured": true, 00:21:11.415 "data_offset": 0, 00:21:11.415 "data_size": 65536 00:21:11.415 }, 00:21:11.415 { 00:21:11.415 "name": "BaseBdev4", 00:21:11.415 "uuid": "97e44532-f69f-42dc-aa76-34441b613f1d", 00:21:11.415 "is_configured": true, 00:21:11.415 "data_offset": 0, 00:21:11.415 "data_size": 65536 00:21:11.415 } 00:21:11.415 ] 00:21:11.415 }' 00:21:11.415 23:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.415 23:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.351 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.351 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:12.351 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:21:12.351 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:12.611 [2024-05-14 23:36:35.806259] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.611 23:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.870 23:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:12.870 "name": "Existed_Raid", 00:21:12.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.870 "strip_size_kb": 0, 00:21:12.870 "state": "configuring", 00:21:12.870 "raid_level": "raid1", 00:21:12.870 "superblock": false, 00:21:12.870 "num_base_bdevs": 4, 00:21:12.870 "num_base_bdevs_discovered": 3, 00:21:12.870 "num_base_bdevs_operational": 4, 00:21:12.870 "base_bdevs_list": [ 00:21:12.870 { 00:21:12.870 "name": null, 00:21:12.870 "uuid": "294fce80-3b65-4de0-8fb0-a6b557755e21", 00:21:12.870 "is_configured": false, 00:21:12.870 "data_offset": 0, 00:21:12.870 "data_size": 65536 00:21:12.870 }, 00:21:12.870 { 00:21:12.870 "name": "BaseBdev2", 00:21:12.870 "uuid": "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e", 00:21:12.870 "is_configured": true, 00:21:12.870 "data_offset": 0, 00:21:12.870 "data_size": 65536 00:21:12.870 }, 00:21:12.870 { 00:21:12.870 "name": "BaseBdev3", 00:21:12.870 "uuid": "a4989f34-4ecd-4a65-a9cb-afcc40b69727", 00:21:12.870 "is_configured": true, 00:21:12.870 "data_offset": 0, 00:21:12.870 "data_size": 65536 00:21:12.870 }, 00:21:12.870 { 00:21:12.870 "name": "BaseBdev4", 00:21:12.870 "uuid": "97e44532-f69f-42dc-aa76-34441b613f1d", 00:21:12.870 "is_configured": true, 00:21:12.870 "data_offset": 0, 00:21:12.870 "data_size": 65536 00:21:12.870 } 00:21:12.870 ] 00:21:12.870 }' 00:21:12.870 23:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:12.870 23:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.437 23:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.437 23:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:13.696 23:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:21:13.696 23:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.696 23:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:13.955 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b NewBaseBdev -u 294fce80-3b65-4de0-8fb0-a6b557755e21 00:21:14.214 [2024-05-14 23:36:37.365805] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:14.214 [2024-05-14 23:36:37.365863] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:21:14.214 [2024-05-14 23:36:37.365874] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:14.214 [2024-05-14 23:36:37.365998] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:14.214 NewBaseBdev 00:21:14.214 [2024-05-14 23:36:37.366477] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:21:14.214 [2024-05-14 23:36:37.366499] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:21:14.214 [2024-05-14 23:36:37.366696] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.214 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:21:14.214 23:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:21:14.214 23:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:14.214 23:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:14.214 23:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:14.214 23:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:14.214 23:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:14.473 23:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:14.763 [ 00:21:14.763 { 00:21:14.763 "name": "NewBaseBdev", 00:21:14.763 "aliases": [ 00:21:14.763 "294fce80-3b65-4de0-8fb0-a6b557755e21" 00:21:14.763 ], 00:21:14.763 "product_name": "Malloc disk", 00:21:14.763 "block_size": 512, 00:21:14.763 "num_blocks": 65536, 00:21:14.763 "uuid": "294fce80-3b65-4de0-8fb0-a6b557755e21", 00:21:14.763 "assigned_rate_limits": { 00:21:14.763 "rw_ios_per_sec": 0, 00:21:14.763 "rw_mbytes_per_sec": 0, 00:21:14.763 "r_mbytes_per_sec": 0, 00:21:14.763 "w_mbytes_per_sec": 0 00:21:14.763 }, 00:21:14.763 "claimed": true, 00:21:14.763 "claim_type": "exclusive_write", 00:21:14.763 "zoned": false, 00:21:14.763 "supported_io_types": { 00:21:14.763 "read": true, 00:21:14.763 "write": true, 00:21:14.763 "unmap": true, 00:21:14.763 "write_zeroes": true, 00:21:14.763 "flush": true, 00:21:14.763 "reset": true, 00:21:14.763 "compare": false, 00:21:14.763 "compare_and_write": false, 00:21:14.763 "abort": true, 00:21:14.763 "nvme_admin": false, 00:21:14.763 "nvme_io": false 00:21:14.763 }, 00:21:14.763 "memory_domains": [ 00:21:14.763 { 00:21:14.763 "dma_device_id": "system", 00:21:14.763 "dma_device_type": 1 00:21:14.763 }, 00:21:14.763 { 00:21:14.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.763 "dma_device_type": 2 00:21:14.763 } 00:21:14.763 ], 00:21:14.763 "driver_specific": {} 00:21:14.763 } 00:21:14.763 ] 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:14.763 
23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.763 23:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.023 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:15.023 "name": "Existed_Raid", 00:21:15.023 "uuid": "f57a24b0-e1f6-41e3-9575-378987f58458", 00:21:15.023 "strip_size_kb": 0, 00:21:15.023 "state": "online", 00:21:15.023 "raid_level": "raid1", 00:21:15.023 "superblock": false, 00:21:15.023 "num_base_bdevs": 4, 00:21:15.023 "num_base_bdevs_discovered": 4, 00:21:15.023 "num_base_bdevs_operational": 4, 00:21:15.023 "base_bdevs_list": [ 00:21:15.023 { 00:21:15.023 "name": "NewBaseBdev", 00:21:15.023 "uuid": "294fce80-3b65-4de0-8fb0-a6b557755e21", 00:21:15.023 "is_configured": true, 00:21:15.023 "data_offset": 0, 00:21:15.023 "data_size": 65536 00:21:15.023 }, 00:21:15.023 { 00:21:15.023 "name": "BaseBdev2", 00:21:15.023 "uuid": "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e", 00:21:15.023 "is_configured": true, 00:21:15.023 "data_offset": 0, 00:21:15.023 "data_size": 65536 00:21:15.023 }, 00:21:15.023 { 00:21:15.023 "name": "BaseBdev3", 00:21:15.023 "uuid": "a4989f34-4ecd-4a65-a9cb-afcc40b69727", 00:21:15.023 "is_configured": true, 00:21:15.023 "data_offset": 0, 00:21:15.023 "data_size": 65536 00:21:15.023 }, 00:21:15.023 { 00:21:15.023 "name": "BaseBdev4", 00:21:15.023 "uuid": "97e44532-f69f-42dc-aa76-34441b613f1d", 00:21:15.023 "is_configured": true, 00:21:15.023 "data_offset": 0, 00:21:15.023 "data_size": 65536 00:21:15.023 } 00:21:15.023 ] 00:21:15.023 }' 00:21:15.023 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:15.023 23:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.592 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:21:15.592 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:21:15.592 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:15.592 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 
00:21:15.592 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:15.592 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:21:15.592 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:15.592 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:15.592 [2024-05-14 23:36:38.842251] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.592 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:15.592 "name": "Existed_Raid", 00:21:15.592 "aliases": [ 00:21:15.592 "f57a24b0-e1f6-41e3-9575-378987f58458" 00:21:15.592 ], 00:21:15.592 "product_name": "Raid Volume", 00:21:15.592 "block_size": 512, 00:21:15.592 "num_blocks": 65536, 00:21:15.592 "uuid": "f57a24b0-e1f6-41e3-9575-378987f58458", 00:21:15.592 "assigned_rate_limits": { 00:21:15.592 "rw_ios_per_sec": 0, 00:21:15.592 "rw_mbytes_per_sec": 0, 00:21:15.592 "r_mbytes_per_sec": 0, 00:21:15.592 "w_mbytes_per_sec": 0 00:21:15.592 }, 00:21:15.592 "claimed": false, 00:21:15.592 "zoned": false, 00:21:15.592 "supported_io_types": { 00:21:15.592 "read": true, 00:21:15.592 "write": true, 00:21:15.592 "unmap": false, 00:21:15.592 "write_zeroes": true, 00:21:15.592 "flush": false, 00:21:15.592 "reset": true, 00:21:15.592 "compare": false, 00:21:15.592 "compare_and_write": false, 00:21:15.592 "abort": false, 00:21:15.592 "nvme_admin": false, 00:21:15.592 "nvme_io": false 00:21:15.592 }, 00:21:15.592 "memory_domains": [ 00:21:15.592 { 00:21:15.592 "dma_device_id": "system", 00:21:15.592 "dma_device_type": 1 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.592 "dma_device_type": 2 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "dma_device_id": "system", 00:21:15.592 "dma_device_type": 1 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.592 "dma_device_type": 2 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "dma_device_id": "system", 00:21:15.592 "dma_device_type": 1 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.592 "dma_device_type": 2 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "dma_device_id": "system", 00:21:15.592 "dma_device_type": 1 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.592 "dma_device_type": 2 00:21:15.592 } 00:21:15.592 ], 00:21:15.592 "driver_specific": { 00:21:15.592 "raid": { 00:21:15.592 "uuid": "f57a24b0-e1f6-41e3-9575-378987f58458", 00:21:15.592 "strip_size_kb": 0, 00:21:15.592 "state": "online", 00:21:15.592 "raid_level": "raid1", 00:21:15.592 "superblock": false, 00:21:15.592 "num_base_bdevs": 4, 00:21:15.592 "num_base_bdevs_discovered": 4, 00:21:15.592 "num_base_bdevs_operational": 4, 00:21:15.592 "base_bdevs_list": [ 00:21:15.592 { 00:21:15.592 "name": "NewBaseBdev", 00:21:15.592 "uuid": "294fce80-3b65-4de0-8fb0-a6b557755e21", 00:21:15.592 "is_configured": true, 00:21:15.592 "data_offset": 0, 00:21:15.592 "data_size": 65536 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "name": "BaseBdev2", 00:21:15.592 "uuid": "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e", 00:21:15.592 "is_configured": true, 00:21:15.592 "data_offset": 0, 00:21:15.592 "data_size": 65536 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "name": "BaseBdev3", 00:21:15.592 "uuid": 
"a4989f34-4ecd-4a65-a9cb-afcc40b69727", 00:21:15.592 "is_configured": true, 00:21:15.592 "data_offset": 0, 00:21:15.592 "data_size": 65536 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "name": "BaseBdev4", 00:21:15.592 "uuid": "97e44532-f69f-42dc-aa76-34441b613f1d", 00:21:15.592 "is_configured": true, 00:21:15.592 "data_offset": 0, 00:21:15.592 "data_size": 65536 00:21:15.592 } 00:21:15.592 ] 00:21:15.592 } 00:21:15.592 } 00:21:15.592 }' 00:21:15.592 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:15.850 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:21:15.850 BaseBdev2 00:21:15.850 BaseBdev3 00:21:15.850 BaseBdev4' 00:21:15.850 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:15.850 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:15.850 23:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:16.108 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:16.108 "name": "NewBaseBdev", 00:21:16.108 "aliases": [ 00:21:16.108 "294fce80-3b65-4de0-8fb0-a6b557755e21" 00:21:16.108 ], 00:21:16.108 "product_name": "Malloc disk", 00:21:16.108 "block_size": 512, 00:21:16.108 "num_blocks": 65536, 00:21:16.108 "uuid": "294fce80-3b65-4de0-8fb0-a6b557755e21", 00:21:16.108 "assigned_rate_limits": { 00:21:16.108 "rw_ios_per_sec": 0, 00:21:16.108 "rw_mbytes_per_sec": 0, 00:21:16.108 "r_mbytes_per_sec": 0, 00:21:16.108 "w_mbytes_per_sec": 0 00:21:16.108 }, 00:21:16.108 "claimed": true, 00:21:16.108 "claim_type": "exclusive_write", 00:21:16.108 "zoned": false, 00:21:16.108 "supported_io_types": { 00:21:16.108 "read": true, 00:21:16.108 "write": true, 00:21:16.108 "unmap": true, 00:21:16.108 "write_zeroes": true, 00:21:16.108 "flush": true, 00:21:16.108 "reset": true, 00:21:16.108 "compare": false, 00:21:16.108 "compare_and_write": false, 00:21:16.108 "abort": true, 00:21:16.108 "nvme_admin": false, 00:21:16.108 "nvme_io": false 00:21:16.108 }, 00:21:16.108 "memory_domains": [ 00:21:16.108 { 00:21:16.108 "dma_device_id": "system", 00:21:16.108 "dma_device_type": 1 00:21:16.108 }, 00:21:16.108 { 00:21:16.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.108 "dma_device_type": 2 00:21:16.108 } 00:21:16.108 ], 00:21:16.108 "driver_specific": {} 00:21:16.108 }' 00:21:16.108 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:16.108 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:16.108 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:16.108 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:16.108 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:16.108 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:16.108 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:16.366 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:16.366 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:21:16.366 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:16.366 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:16.366 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:16.366 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:16.366 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:16.366 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:16.623 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:16.623 "name": "BaseBdev2", 00:21:16.623 "aliases": [ 00:21:16.623 "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e" 00:21:16.623 ], 00:21:16.623 "product_name": "Malloc disk", 00:21:16.623 "block_size": 512, 00:21:16.623 "num_blocks": 65536, 00:21:16.623 "uuid": "3e5b3f37-13a4-4445-b4f7-b26273bb5b7e", 00:21:16.623 "assigned_rate_limits": { 00:21:16.623 "rw_ios_per_sec": 0, 00:21:16.623 "rw_mbytes_per_sec": 0, 00:21:16.623 "r_mbytes_per_sec": 0, 00:21:16.623 "w_mbytes_per_sec": 0 00:21:16.623 }, 00:21:16.623 "claimed": true, 00:21:16.623 "claim_type": "exclusive_write", 00:21:16.623 "zoned": false, 00:21:16.623 "supported_io_types": { 00:21:16.623 "read": true, 00:21:16.623 "write": true, 00:21:16.623 "unmap": true, 00:21:16.623 "write_zeroes": true, 00:21:16.623 "flush": true, 00:21:16.623 "reset": true, 00:21:16.623 "compare": false, 00:21:16.623 "compare_and_write": false, 00:21:16.623 "abort": true, 00:21:16.623 "nvme_admin": false, 00:21:16.623 "nvme_io": false 00:21:16.623 }, 00:21:16.623 "memory_domains": [ 00:21:16.623 { 00:21:16.623 "dma_device_id": "system", 00:21:16.623 "dma_device_type": 1 00:21:16.623 }, 00:21:16.623 { 00:21:16.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.623 "dma_device_type": 2 00:21:16.623 } 00:21:16.623 ], 00:21:16.623 "driver_specific": {} 00:21:16.623 }' 00:21:16.623 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:16.881 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:16.881 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:16.881 23:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:16.881 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:16.881 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:16.881 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:17.138 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:17.138 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:17.138 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:17.138 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:17.138 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:17.138 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:17.138 23:36:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:17.138 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:17.397 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:17.397 "name": "BaseBdev3", 00:21:17.397 "aliases": [ 00:21:17.397 "a4989f34-4ecd-4a65-a9cb-afcc40b69727" 00:21:17.397 ], 00:21:17.397 "product_name": "Malloc disk", 00:21:17.397 "block_size": 512, 00:21:17.397 "num_blocks": 65536, 00:21:17.397 "uuid": "a4989f34-4ecd-4a65-a9cb-afcc40b69727", 00:21:17.397 "assigned_rate_limits": { 00:21:17.397 "rw_ios_per_sec": 0, 00:21:17.397 "rw_mbytes_per_sec": 0, 00:21:17.397 "r_mbytes_per_sec": 0, 00:21:17.397 "w_mbytes_per_sec": 0 00:21:17.397 }, 00:21:17.397 "claimed": true, 00:21:17.397 "claim_type": "exclusive_write", 00:21:17.397 "zoned": false, 00:21:17.397 "supported_io_types": { 00:21:17.397 "read": true, 00:21:17.397 "write": true, 00:21:17.397 "unmap": true, 00:21:17.397 "write_zeroes": true, 00:21:17.397 "flush": true, 00:21:17.397 "reset": true, 00:21:17.397 "compare": false, 00:21:17.397 "compare_and_write": false, 00:21:17.397 "abort": true, 00:21:17.397 "nvme_admin": false, 00:21:17.397 "nvme_io": false 00:21:17.397 }, 00:21:17.397 "memory_domains": [ 00:21:17.397 { 00:21:17.397 "dma_device_id": "system", 00:21:17.397 "dma_device_type": 1 00:21:17.397 }, 00:21:17.397 { 00:21:17.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.397 "dma_device_type": 2 00:21:17.397 } 00:21:17.397 ], 00:21:17.397 "driver_specific": {} 00:21:17.397 }' 00:21:17.397 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:17.397 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:17.655 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:17.655 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:17.655 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:17.655 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:17.655 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:17.655 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:17.913 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:17.913 23:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:17.913 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:17.913 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:17.913 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:17.913 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:17.913 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:18.171 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:18.171 "name": "BaseBdev4", 00:21:18.171 "aliases": [ 00:21:18.171 "97e44532-f69f-42dc-aa76-34441b613f1d" 00:21:18.171 ], 00:21:18.171 "product_name": "Malloc disk", 
00:21:18.171 "block_size": 512, 00:21:18.171 "num_blocks": 65536, 00:21:18.171 "uuid": "97e44532-f69f-42dc-aa76-34441b613f1d", 00:21:18.171 "assigned_rate_limits": { 00:21:18.171 "rw_ios_per_sec": 0, 00:21:18.171 "rw_mbytes_per_sec": 0, 00:21:18.171 "r_mbytes_per_sec": 0, 00:21:18.171 "w_mbytes_per_sec": 0 00:21:18.171 }, 00:21:18.171 "claimed": true, 00:21:18.171 "claim_type": "exclusive_write", 00:21:18.171 "zoned": false, 00:21:18.171 "supported_io_types": { 00:21:18.171 "read": true, 00:21:18.171 "write": true, 00:21:18.171 "unmap": true, 00:21:18.171 "write_zeroes": true, 00:21:18.171 "flush": true, 00:21:18.171 "reset": true, 00:21:18.171 "compare": false, 00:21:18.171 "compare_and_write": false, 00:21:18.171 "abort": true, 00:21:18.171 "nvme_admin": false, 00:21:18.171 "nvme_io": false 00:21:18.171 }, 00:21:18.171 "memory_domains": [ 00:21:18.171 { 00:21:18.171 "dma_device_id": "system", 00:21:18.171 "dma_device_type": 1 00:21:18.171 }, 00:21:18.171 { 00:21:18.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.171 "dma_device_type": 2 00:21:18.171 } 00:21:18.171 ], 00:21:18.171 "driver_specific": {} 00:21:18.171 }' 00:21:18.171 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:18.171 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:18.171 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:18.171 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:18.429 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:18.429 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:18.429 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:18.429 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:18.429 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:18.429 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:18.690 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:18.690 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:18.690 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:18.956 [2024-05-14 23:36:41.978448] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:18.956 [2024-05-14 23:36:41.978489] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:18.956 [2024-05-14 23:36:41.978559] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.956 [2024-05-14 23:36:41.978765] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.956 [2024-05-14 23:36:41.978793] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:21:18.956 23:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 69783 00:21:18.956 23:36:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 69783 ']' 00:21:18.956 23:36:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- 
# kill -0 69783 00:21:18.956 23:36:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:21:18.956 23:36:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:18.956 23:36:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69783 00:21:18.956 killing process with pid 69783 00:21:18.956 23:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:18.956 23:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:18.956 23:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69783' 00:21:18.956 23:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 69783 00:21:18.956 23:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 69783 00:21:18.956 [2024-05-14 23:36:42.016109] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:19.226 [2024-05-14 23:36:42.344164] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:21:20.600 ************************************ 00:21:20.600 END TEST raid_state_function_test 00:21:20.600 ************************************ 00:21:20.600 00:21:20.600 real 0m34.115s 00:21:20.600 user 1m4.257s 00:21:20.600 sys 0m3.362s 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.600 23:36:43 bdev_raid -- bdev/bdev_raid.sh@816 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:21:20.600 23:36:43 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:20.600 23:36:43 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:20.600 23:36:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:20.600 ************************************ 00:21:20.600 START TEST raid_state_function_test_sb 00:21:20.600 ************************************ 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 true 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i 
<= num_base_bdevs )) 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev3 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # echo BaseBdev4 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:20.600 Process raid pid: 70886 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=70886 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 70886' 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 70886 /var/tmp/spdk-raid.sock 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 70886 ']' 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:20.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
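At this point the superblock variant of the test brings up its own SPDK app on the raid RPC socket and then drives the same state checks through it. Condensed from the commands in the surrounding trace (the backgrounding with '&'/'$!' and the sourcing of autotest_common.sh, which provides waitforlisten, are assumptions about the harness rather than lines shown verbatim here):

    # bdev_raid.sh@244-@247: start bdev_svc with the bdev_raid debug log flag and wait
    # until it is listening on the raid RPC socket (pid 70886 in this run)
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    # bdev_raid.sh@251: register the raid1 volume with a superblock (-s) before any of its
    # base bdevs exist; Existed_Raid therefore starts out in the "configuring" state with
    # num_base_bdevs_discovered == 0, as the JSON dumped further down shows
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
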
00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:20.600 23:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.600 [2024-05-14 23:36:43.755916] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:21:20.600 [2024-05-14 23:36:43.756117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.858 [2024-05-14 23:36:43.924236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.116 [2024-05-14 23:36:44.181160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.116 [2024-05-14 23:36:44.375497] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.375 23:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:21.375 23:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:21:21.375 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:21.641 [2024-05-14 23:36:44.724169] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:21.641 [2024-05-14 23:36:44.724265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:21.641 [2024-05-14 23:36:44.724287] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:21.641 [2024-05-14 23:36:44.724323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:21.641 [2024-05-14 23:36:44.724337] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:21.641 [2024-05-14 23:36:44.724398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:21.641 [2024-05-14 23:36:44.724414] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:21.641 [2024-05-14 23:36:44.724447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:21.641 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:21.641 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:21.641 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:21.641 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:21.641 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:21.641 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:21.641 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.641 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.641 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.641 
23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.641 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.641 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.986 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:21.986 "name": "Existed_Raid", 00:21:21.986 "uuid": "08040458-c724-40d1-8b4c-c84e219ab1ad", 00:21:21.986 "strip_size_kb": 0, 00:21:21.986 "state": "configuring", 00:21:21.986 "raid_level": "raid1", 00:21:21.986 "superblock": true, 00:21:21.986 "num_base_bdevs": 4, 00:21:21.986 "num_base_bdevs_discovered": 0, 00:21:21.986 "num_base_bdevs_operational": 4, 00:21:21.986 "base_bdevs_list": [ 00:21:21.986 { 00:21:21.986 "name": "BaseBdev1", 00:21:21.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.986 "is_configured": false, 00:21:21.986 "data_offset": 0, 00:21:21.986 "data_size": 0 00:21:21.986 }, 00:21:21.986 { 00:21:21.986 "name": "BaseBdev2", 00:21:21.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.986 "is_configured": false, 00:21:21.986 "data_offset": 0, 00:21:21.986 "data_size": 0 00:21:21.986 }, 00:21:21.986 { 00:21:21.986 "name": "BaseBdev3", 00:21:21.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.986 "is_configured": false, 00:21:21.986 "data_offset": 0, 00:21:21.986 "data_size": 0 00:21:21.986 }, 00:21:21.986 { 00:21:21.986 "name": "BaseBdev4", 00:21:21.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.986 "is_configured": false, 00:21:21.986 "data_offset": 0, 00:21:21.986 "data_size": 0 00:21:21.986 } 00:21:21.986 ] 00:21:21.986 }' 00:21:21.986 23:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:21.986 23:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.552 23:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:22.810 [2024-05-14 23:36:45.892173] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:22.810 [2024-05-14 23:36:45.892228] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:21:22.810 23:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:23.067 [2024-05-14 23:36:46.128259] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:23.067 [2024-05-14 23:36:46.128357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:23.067 [2024-05-14 23:36:46.128386] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:23.067 [2024-05-14 23:36:46.128422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:23.067 [2024-05-14 23:36:46.128437] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:23.067 [2024-05-14 23:36:46.128461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:23.067 
[2024-05-14 23:36:46.128474] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:23.067 [2024-05-14 23:36:46.128511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:23.067 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:23.326 [2024-05-14 23:36:46.406565] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:23.326 BaseBdev1 00:21:23.326 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:21:23.326 23:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:23.326 23:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:23.326 23:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:23.326 23:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:23.326 23:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:23.326 23:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:23.326 23:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:23.584 [ 00:21:23.584 { 00:21:23.584 "name": "BaseBdev1", 00:21:23.584 "aliases": [ 00:21:23.584 "a8112cd0-6e6b-4933-b5df-0a9a670e10bf" 00:21:23.584 ], 00:21:23.584 "product_name": "Malloc disk", 00:21:23.584 "block_size": 512, 00:21:23.584 "num_blocks": 65536, 00:21:23.584 "uuid": "a8112cd0-6e6b-4933-b5df-0a9a670e10bf", 00:21:23.584 "assigned_rate_limits": { 00:21:23.584 "rw_ios_per_sec": 0, 00:21:23.584 "rw_mbytes_per_sec": 0, 00:21:23.584 "r_mbytes_per_sec": 0, 00:21:23.584 "w_mbytes_per_sec": 0 00:21:23.584 }, 00:21:23.584 "claimed": true, 00:21:23.584 "claim_type": "exclusive_write", 00:21:23.584 "zoned": false, 00:21:23.585 "supported_io_types": { 00:21:23.585 "read": true, 00:21:23.585 "write": true, 00:21:23.585 "unmap": true, 00:21:23.585 "write_zeroes": true, 00:21:23.585 "flush": true, 00:21:23.585 "reset": true, 00:21:23.585 "compare": false, 00:21:23.585 "compare_and_write": false, 00:21:23.585 "abort": true, 00:21:23.585 "nvme_admin": false, 00:21:23.585 "nvme_io": false 00:21:23.585 }, 00:21:23.585 "memory_domains": [ 00:21:23.585 { 00:21:23.585 "dma_device_id": "system", 00:21:23.585 "dma_device_type": 1 00:21:23.585 }, 00:21:23.585 { 00:21:23.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.585 "dma_device_type": 2 00:21:23.585 } 00:21:23.585 ], 00:21:23.585 "driver_specific": {} 00:21:23.585 } 00:21:23.585 ] 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:23.585 
23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.585 23:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.843 23:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:23.843 "name": "Existed_Raid", 00:21:23.843 "uuid": "e06bf007-9260-4e17-91ce-c52b206a4e1c", 00:21:23.843 "strip_size_kb": 0, 00:21:23.843 "state": "configuring", 00:21:23.843 "raid_level": "raid1", 00:21:23.843 "superblock": true, 00:21:23.843 "num_base_bdevs": 4, 00:21:23.843 "num_base_bdevs_discovered": 1, 00:21:23.843 "num_base_bdevs_operational": 4, 00:21:23.843 "base_bdevs_list": [ 00:21:23.843 { 00:21:23.843 "name": "BaseBdev1", 00:21:23.843 "uuid": "a8112cd0-6e6b-4933-b5df-0a9a670e10bf", 00:21:23.843 "is_configured": true, 00:21:23.843 "data_offset": 2048, 00:21:23.843 "data_size": 63488 00:21:23.843 }, 00:21:23.843 { 00:21:23.843 "name": "BaseBdev2", 00:21:23.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.843 "is_configured": false, 00:21:23.843 "data_offset": 0, 00:21:23.843 "data_size": 0 00:21:23.843 }, 00:21:23.843 { 00:21:23.843 "name": "BaseBdev3", 00:21:23.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.843 "is_configured": false, 00:21:23.843 "data_offset": 0, 00:21:23.843 "data_size": 0 00:21:23.843 }, 00:21:23.843 { 00:21:23.843 "name": "BaseBdev4", 00:21:23.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.843 "is_configured": false, 00:21:23.843 "data_offset": 0, 00:21:23.843 "data_size": 0 00:21:23.843 } 00:21:23.843 ] 00:21:23.843 }' 00:21:23.844 23:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:23.844 23:36:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.411 23:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:24.671 [2024-05-14 23:36:47.882816] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:24.671 [2024-05-14 23:36:47.882881] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:21:24.671 23:36:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:24.929 [2024-05-14 23:36:48.074896] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:21:24.929 [2024-05-14 23:36:48.076550] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:24.929 [2024-05-14 23:36:48.076635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:24.929 [2024-05-14 23:36:48.076662] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:24.929 [2024-05-14 23:36:48.076694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:24.929 [2024-05-14 23:36:48.076706] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:24.929 [2024-05-14 23:36:48.076725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.929 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.188 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:25.188 "name": "Existed_Raid", 00:21:25.188 "uuid": "e2c15a97-fbd8-4788-8c0d-060bd0140a8a", 00:21:25.188 "strip_size_kb": 0, 00:21:25.188 "state": "configuring", 00:21:25.188 "raid_level": "raid1", 00:21:25.188 "superblock": true, 00:21:25.188 "num_base_bdevs": 4, 00:21:25.188 "num_base_bdevs_discovered": 1, 00:21:25.188 "num_base_bdevs_operational": 4, 00:21:25.188 "base_bdevs_list": [ 00:21:25.188 { 00:21:25.188 "name": "BaseBdev1", 00:21:25.188 "uuid": "a8112cd0-6e6b-4933-b5df-0a9a670e10bf", 00:21:25.188 "is_configured": true, 00:21:25.188 "data_offset": 2048, 00:21:25.188 "data_size": 63488 00:21:25.188 }, 00:21:25.188 { 00:21:25.188 "name": "BaseBdev2", 00:21:25.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.188 "is_configured": false, 00:21:25.188 "data_offset": 0, 00:21:25.188 "data_size": 0 00:21:25.188 }, 00:21:25.188 { 00:21:25.188 "name": "BaseBdev3", 00:21:25.188 
"uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.188 "is_configured": false, 00:21:25.188 "data_offset": 0, 00:21:25.188 "data_size": 0 00:21:25.188 }, 00:21:25.188 { 00:21:25.188 "name": "BaseBdev4", 00:21:25.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.188 "is_configured": false, 00:21:25.188 "data_offset": 0, 00:21:25.188 "data_size": 0 00:21:25.188 } 00:21:25.188 ] 00:21:25.188 }' 00:21:25.188 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:25.188 23:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.754 23:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:26.012 [2024-05-14 23:36:49.139227] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.012 BaseBdev2 00:21:26.012 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:21:26.012 23:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:26.012 23:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:26.012 23:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:26.012 23:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:26.012 23:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:26.012 23:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:26.270 23:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:26.528 [ 00:21:26.528 { 00:21:26.528 "name": "BaseBdev2", 00:21:26.528 "aliases": [ 00:21:26.528 "39827138-8f7b-4cd8-ad39-5fb323a9e066" 00:21:26.528 ], 00:21:26.528 "product_name": "Malloc disk", 00:21:26.528 "block_size": 512, 00:21:26.528 "num_blocks": 65536, 00:21:26.528 "uuid": "39827138-8f7b-4cd8-ad39-5fb323a9e066", 00:21:26.528 "assigned_rate_limits": { 00:21:26.528 "rw_ios_per_sec": 0, 00:21:26.528 "rw_mbytes_per_sec": 0, 00:21:26.528 "r_mbytes_per_sec": 0, 00:21:26.528 "w_mbytes_per_sec": 0 00:21:26.528 }, 00:21:26.528 "claimed": true, 00:21:26.528 "claim_type": "exclusive_write", 00:21:26.528 "zoned": false, 00:21:26.528 "supported_io_types": { 00:21:26.528 "read": true, 00:21:26.528 "write": true, 00:21:26.528 "unmap": true, 00:21:26.528 "write_zeroes": true, 00:21:26.528 "flush": true, 00:21:26.528 "reset": true, 00:21:26.528 "compare": false, 00:21:26.528 "compare_and_write": false, 00:21:26.528 "abort": true, 00:21:26.528 "nvme_admin": false, 00:21:26.528 "nvme_io": false 00:21:26.528 }, 00:21:26.528 "memory_domains": [ 00:21:26.528 { 00:21:26.528 "dma_device_id": "system", 00:21:26.528 "dma_device_type": 1 00:21:26.528 }, 00:21:26.528 { 00:21:26.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.528 "dma_device_type": 2 00:21:26.528 } 00:21:26.528 ], 00:21:26.528 "driver_specific": {} 00:21:26.528 } 00:21:26.528 ] 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:26.528 23:36:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.528 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.787 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:26.787 "name": "Existed_Raid", 00:21:26.787 "uuid": "e2c15a97-fbd8-4788-8c0d-060bd0140a8a", 00:21:26.787 "strip_size_kb": 0, 00:21:26.787 "state": "configuring", 00:21:26.787 "raid_level": "raid1", 00:21:26.787 "superblock": true, 00:21:26.787 "num_base_bdevs": 4, 00:21:26.787 "num_base_bdevs_discovered": 2, 00:21:26.787 "num_base_bdevs_operational": 4, 00:21:26.787 "base_bdevs_list": [ 00:21:26.787 { 00:21:26.787 "name": "BaseBdev1", 00:21:26.787 "uuid": "a8112cd0-6e6b-4933-b5df-0a9a670e10bf", 00:21:26.787 "is_configured": true, 00:21:26.787 "data_offset": 2048, 00:21:26.787 "data_size": 63488 00:21:26.787 }, 00:21:26.787 { 00:21:26.787 "name": "BaseBdev2", 00:21:26.787 "uuid": "39827138-8f7b-4cd8-ad39-5fb323a9e066", 00:21:26.787 "is_configured": true, 00:21:26.787 "data_offset": 2048, 00:21:26.787 "data_size": 63488 00:21:26.787 }, 00:21:26.787 { 00:21:26.787 "name": "BaseBdev3", 00:21:26.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.787 "is_configured": false, 00:21:26.787 "data_offset": 0, 00:21:26.787 "data_size": 0 00:21:26.787 }, 00:21:26.787 { 00:21:26.787 "name": "BaseBdev4", 00:21:26.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.787 "is_configured": false, 00:21:26.787 "data_offset": 0, 00:21:26.787 "data_size": 0 00:21:26.787 } 00:21:26.787 ] 00:21:26.787 }' 00:21:26.787 23:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:26.787 23:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.379 23:36:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
00:21:27.656 BaseBdev3 00:21:27.656 [2024-05-14 23:36:50.783239] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:27.656 23:36:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:21:27.656 23:36:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:27.656 23:36:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:27.656 23:36:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:27.656 23:36:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:27.656 23:36:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:27.656 23:36:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:27.914 23:36:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:27.914 [ 00:21:27.914 { 00:21:27.914 "name": "BaseBdev3", 00:21:27.914 "aliases": [ 00:21:27.914 "de05c89a-cb28-4f15-8704-d95808a95248" 00:21:27.914 ], 00:21:27.914 "product_name": "Malloc disk", 00:21:27.914 "block_size": 512, 00:21:27.914 "num_blocks": 65536, 00:21:27.914 "uuid": "de05c89a-cb28-4f15-8704-d95808a95248", 00:21:27.914 "assigned_rate_limits": { 00:21:27.914 "rw_ios_per_sec": 0, 00:21:27.914 "rw_mbytes_per_sec": 0, 00:21:27.914 "r_mbytes_per_sec": 0, 00:21:27.914 "w_mbytes_per_sec": 0 00:21:27.914 }, 00:21:27.914 "claimed": true, 00:21:27.914 "claim_type": "exclusive_write", 00:21:27.914 "zoned": false, 00:21:27.914 "supported_io_types": { 00:21:27.914 "read": true, 00:21:27.914 "write": true, 00:21:27.914 "unmap": true, 00:21:27.914 "write_zeroes": true, 00:21:27.914 "flush": true, 00:21:27.914 "reset": true, 00:21:27.914 "compare": false, 00:21:27.914 "compare_and_write": false, 00:21:27.914 "abort": true, 00:21:27.914 "nvme_admin": false, 00:21:27.914 "nvme_io": false 00:21:27.914 }, 00:21:27.914 "memory_domains": [ 00:21:27.914 { 00:21:27.914 "dma_device_id": "system", 00:21:27.914 "dma_device_type": 1 00:21:27.914 }, 00:21:27.914 { 00:21:27.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.914 "dma_device_type": 2 00:21:27.914 } 00:21:27.914 ], 00:21:27.914 "driver_specific": {} 00:21:27.914 } 00:21:27.914 ] 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:27.914 23:36:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.914 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.173 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:28.173 "name": "Existed_Raid", 00:21:28.173 "uuid": "e2c15a97-fbd8-4788-8c0d-060bd0140a8a", 00:21:28.173 "strip_size_kb": 0, 00:21:28.173 "state": "configuring", 00:21:28.173 "raid_level": "raid1", 00:21:28.173 "superblock": true, 00:21:28.173 "num_base_bdevs": 4, 00:21:28.173 "num_base_bdevs_discovered": 3, 00:21:28.173 "num_base_bdevs_operational": 4, 00:21:28.173 "base_bdevs_list": [ 00:21:28.173 { 00:21:28.173 "name": "BaseBdev1", 00:21:28.173 "uuid": "a8112cd0-6e6b-4933-b5df-0a9a670e10bf", 00:21:28.173 "is_configured": true, 00:21:28.173 "data_offset": 2048, 00:21:28.173 "data_size": 63488 00:21:28.173 }, 00:21:28.173 { 00:21:28.173 "name": "BaseBdev2", 00:21:28.173 "uuid": "39827138-8f7b-4cd8-ad39-5fb323a9e066", 00:21:28.173 "is_configured": true, 00:21:28.173 "data_offset": 2048, 00:21:28.173 "data_size": 63488 00:21:28.173 }, 00:21:28.173 { 00:21:28.173 "name": "BaseBdev3", 00:21:28.173 "uuid": "de05c89a-cb28-4f15-8704-d95808a95248", 00:21:28.173 "is_configured": true, 00:21:28.173 "data_offset": 2048, 00:21:28.173 "data_size": 63488 00:21:28.173 }, 00:21:28.173 { 00:21:28.173 "name": "BaseBdev4", 00:21:28.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.173 "is_configured": false, 00:21:28.173 "data_offset": 0, 00:21:28.173 "data_size": 0 00:21:28.173 } 00:21:28.173 ] 00:21:28.173 }' 00:21:28.173 23:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:28.173 23:36:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.111 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:29.111 [2024-05-14 23:36:52.308003] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:29.111 BaseBdev4 00:21:29.111 [2024-05-14 23:36:52.308505] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:21:29.111 [2024-05-14 23:36:52.308527] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:29.111 [2024-05-14 23:36:52.308648] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:21:29.111 [2024-05-14 23:36:52.308892] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:21:29.111 [2024-05-14 23:36:52.308908] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:21:29.111 
[2024-05-14 23:36:52.309040] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.111 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:21:29.111 23:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:21:29.111 23:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:29.111 23:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:29.111 23:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:29.111 23:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:29.111 23:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:29.370 23:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:29.630 [ 00:21:29.630 { 00:21:29.630 "name": "BaseBdev4", 00:21:29.630 "aliases": [ 00:21:29.630 "699ef7cf-1ea4-4eb0-8f36-b65e02a68993" 00:21:29.630 ], 00:21:29.630 "product_name": "Malloc disk", 00:21:29.630 "block_size": 512, 00:21:29.630 "num_blocks": 65536, 00:21:29.630 "uuid": "699ef7cf-1ea4-4eb0-8f36-b65e02a68993", 00:21:29.630 "assigned_rate_limits": { 00:21:29.630 "rw_ios_per_sec": 0, 00:21:29.630 "rw_mbytes_per_sec": 0, 00:21:29.630 "r_mbytes_per_sec": 0, 00:21:29.630 "w_mbytes_per_sec": 0 00:21:29.630 }, 00:21:29.630 "claimed": true, 00:21:29.630 "claim_type": "exclusive_write", 00:21:29.630 "zoned": false, 00:21:29.630 "supported_io_types": { 00:21:29.630 "read": true, 00:21:29.630 "write": true, 00:21:29.630 "unmap": true, 00:21:29.630 "write_zeroes": true, 00:21:29.631 "flush": true, 00:21:29.631 "reset": true, 00:21:29.631 "compare": false, 00:21:29.631 "compare_and_write": false, 00:21:29.631 "abort": true, 00:21:29.631 "nvme_admin": false, 00:21:29.631 "nvme_io": false 00:21:29.631 }, 00:21:29.631 "memory_domains": [ 00:21:29.631 { 00:21:29.631 "dma_device_id": "system", 00:21:29.631 "dma_device_type": 1 00:21:29.631 }, 00:21:29.631 { 00:21:29.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.631 "dma_device_type": 2 00:21:29.631 } 00:21:29.631 ], 00:21:29.631 "driver_specific": {} 00:21:29.631 } 00:21:29.631 ] 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 
-- # local num_base_bdevs_operational=4 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.631 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.889 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:29.889 "name": "Existed_Raid", 00:21:29.889 "uuid": "e2c15a97-fbd8-4788-8c0d-060bd0140a8a", 00:21:29.889 "strip_size_kb": 0, 00:21:29.889 "state": "online", 00:21:29.889 "raid_level": "raid1", 00:21:29.889 "superblock": true, 00:21:29.889 "num_base_bdevs": 4, 00:21:29.889 "num_base_bdevs_discovered": 4, 00:21:29.889 "num_base_bdevs_operational": 4, 00:21:29.889 "base_bdevs_list": [ 00:21:29.889 { 00:21:29.889 "name": "BaseBdev1", 00:21:29.889 "uuid": "a8112cd0-6e6b-4933-b5df-0a9a670e10bf", 00:21:29.889 "is_configured": true, 00:21:29.889 "data_offset": 2048, 00:21:29.889 "data_size": 63488 00:21:29.889 }, 00:21:29.889 { 00:21:29.889 "name": "BaseBdev2", 00:21:29.889 "uuid": "39827138-8f7b-4cd8-ad39-5fb323a9e066", 00:21:29.889 "is_configured": true, 00:21:29.889 "data_offset": 2048, 00:21:29.889 "data_size": 63488 00:21:29.889 }, 00:21:29.889 { 00:21:29.889 "name": "BaseBdev3", 00:21:29.889 "uuid": "de05c89a-cb28-4f15-8704-d95808a95248", 00:21:29.889 "is_configured": true, 00:21:29.889 "data_offset": 2048, 00:21:29.889 "data_size": 63488 00:21:29.889 }, 00:21:29.889 { 00:21:29.889 "name": "BaseBdev4", 00:21:29.889 "uuid": "699ef7cf-1ea4-4eb0-8f36-b65e02a68993", 00:21:29.889 "is_configured": true, 00:21:29.889 "data_offset": 2048, 00:21:29.889 "data_size": 63488 00:21:29.889 } 00:21:29.889 ] 00:21:29.889 }' 00:21:29.889 23:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:29.889 23:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.455 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:21:30.455 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:21:30.455 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:30.455 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:30.455 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:30.455 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:21:30.455 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:30.455 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:30.713 [2024-05-14 23:36:53.820495] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:30.713 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:30.713 "name": "Existed_Raid", 00:21:30.713 "aliases": [ 00:21:30.713 "e2c15a97-fbd8-4788-8c0d-060bd0140a8a" 00:21:30.713 ], 00:21:30.713 "product_name": "Raid Volume", 00:21:30.713 "block_size": 512, 00:21:30.713 "num_blocks": 63488, 00:21:30.713 "uuid": "e2c15a97-fbd8-4788-8c0d-060bd0140a8a", 00:21:30.713 "assigned_rate_limits": { 00:21:30.713 "rw_ios_per_sec": 0, 00:21:30.713 "rw_mbytes_per_sec": 0, 00:21:30.713 "r_mbytes_per_sec": 0, 00:21:30.713 "w_mbytes_per_sec": 0 00:21:30.713 }, 00:21:30.713 "claimed": false, 00:21:30.713 "zoned": false, 00:21:30.713 "supported_io_types": { 00:21:30.713 "read": true, 00:21:30.713 "write": true, 00:21:30.713 "unmap": false, 00:21:30.713 "write_zeroes": true, 00:21:30.713 "flush": false, 00:21:30.713 "reset": true, 00:21:30.713 "compare": false, 00:21:30.713 "compare_and_write": false, 00:21:30.713 "abort": false, 00:21:30.713 "nvme_admin": false, 00:21:30.713 "nvme_io": false 00:21:30.713 }, 00:21:30.713 "memory_domains": [ 00:21:30.713 { 00:21:30.713 "dma_device_id": "system", 00:21:30.713 "dma_device_type": 1 00:21:30.713 }, 00:21:30.713 { 00:21:30.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.713 "dma_device_type": 2 00:21:30.713 }, 00:21:30.713 { 00:21:30.713 "dma_device_id": "system", 00:21:30.713 "dma_device_type": 1 00:21:30.713 }, 00:21:30.713 { 00:21:30.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.713 "dma_device_type": 2 00:21:30.713 }, 00:21:30.713 { 00:21:30.713 "dma_device_id": "system", 00:21:30.713 "dma_device_type": 1 00:21:30.713 }, 00:21:30.713 { 00:21:30.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.713 "dma_device_type": 2 00:21:30.713 }, 00:21:30.713 { 00:21:30.713 "dma_device_id": "system", 00:21:30.713 "dma_device_type": 1 00:21:30.713 }, 00:21:30.713 { 00:21:30.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.713 "dma_device_type": 2 00:21:30.713 } 00:21:30.713 ], 00:21:30.713 "driver_specific": { 00:21:30.713 "raid": { 00:21:30.713 "uuid": "e2c15a97-fbd8-4788-8c0d-060bd0140a8a", 00:21:30.713 "strip_size_kb": 0, 00:21:30.713 "state": "online", 00:21:30.713 "raid_level": "raid1", 00:21:30.713 "superblock": true, 00:21:30.713 "num_base_bdevs": 4, 00:21:30.713 "num_base_bdevs_discovered": 4, 00:21:30.713 "num_base_bdevs_operational": 4, 00:21:30.713 "base_bdevs_list": [ 00:21:30.713 { 00:21:30.713 "name": "BaseBdev1", 00:21:30.713 "uuid": "a8112cd0-6e6b-4933-b5df-0a9a670e10bf", 00:21:30.713 "is_configured": true, 00:21:30.713 "data_offset": 2048, 00:21:30.713 "data_size": 63488 00:21:30.713 }, 00:21:30.713 { 00:21:30.713 "name": "BaseBdev2", 00:21:30.713 "uuid": "39827138-8f7b-4cd8-ad39-5fb323a9e066", 00:21:30.713 "is_configured": true, 00:21:30.713 "data_offset": 2048, 00:21:30.713 "data_size": 63488 00:21:30.713 }, 00:21:30.713 { 00:21:30.713 "name": "BaseBdev3", 00:21:30.713 "uuid": "de05c89a-cb28-4f15-8704-d95808a95248", 00:21:30.713 "is_configured": true, 00:21:30.713 "data_offset": 2048, 00:21:30.713 "data_size": 63488 00:21:30.713 }, 00:21:30.713 { 00:21:30.713 "name": "BaseBdev4", 00:21:30.713 "uuid": "699ef7cf-1ea4-4eb0-8f36-b65e02a68993", 00:21:30.713 "is_configured": true, 00:21:30.713 "data_offset": 2048, 00:21:30.713 "data_size": 63488 00:21:30.713 } 00:21:30.713 ] 00:21:30.713 } 00:21:30.713 } 00:21:30.713 }' 00:21:30.713 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:30.713 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:21:30.713 BaseBdev2 00:21:30.713 BaseBdev3 00:21:30.713 BaseBdev4' 00:21:30.713 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:30.713 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:30.713 23:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:30.972 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:30.972 "name": "BaseBdev1", 00:21:30.972 "aliases": [ 00:21:30.972 "a8112cd0-6e6b-4933-b5df-0a9a670e10bf" 00:21:30.972 ], 00:21:30.972 "product_name": "Malloc disk", 00:21:30.972 "block_size": 512, 00:21:30.972 "num_blocks": 65536, 00:21:30.972 "uuid": "a8112cd0-6e6b-4933-b5df-0a9a670e10bf", 00:21:30.972 "assigned_rate_limits": { 00:21:30.972 "rw_ios_per_sec": 0, 00:21:30.972 "rw_mbytes_per_sec": 0, 00:21:30.972 "r_mbytes_per_sec": 0, 00:21:30.972 "w_mbytes_per_sec": 0 00:21:30.972 }, 00:21:30.972 "claimed": true, 00:21:30.972 "claim_type": "exclusive_write", 00:21:30.972 "zoned": false, 00:21:30.972 "supported_io_types": { 00:21:30.972 "read": true, 00:21:30.972 "write": true, 00:21:30.972 "unmap": true, 00:21:30.972 "write_zeroes": true, 00:21:30.972 "flush": true, 00:21:30.972 "reset": true, 00:21:30.972 "compare": false, 00:21:30.972 "compare_and_write": false, 00:21:30.972 "abort": true, 00:21:30.972 "nvme_admin": false, 00:21:30.972 "nvme_io": false 00:21:30.972 }, 00:21:30.972 "memory_domains": [ 00:21:30.972 { 00:21:30.972 "dma_device_id": "system", 00:21:30.972 "dma_device_type": 1 00:21:30.972 }, 00:21:30.972 { 00:21:30.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.972 "dma_device_type": 2 00:21:30.972 } 00:21:30.972 ], 00:21:30.972 "driver_specific": {} 00:21:30.972 }' 00:21:30.972 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:30.972 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:30.972 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:30.972 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:31.230 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:31.230 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:31.230 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:31.231 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:31.231 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:31.231 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:31.231 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:31.490 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:31.490 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:31.490 23:36:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:31.490 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:31.749 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:31.749 "name": "BaseBdev2", 00:21:31.749 "aliases": [ 00:21:31.749 "39827138-8f7b-4cd8-ad39-5fb323a9e066" 00:21:31.749 ], 00:21:31.749 "product_name": "Malloc disk", 00:21:31.749 "block_size": 512, 00:21:31.749 "num_blocks": 65536, 00:21:31.749 "uuid": "39827138-8f7b-4cd8-ad39-5fb323a9e066", 00:21:31.749 "assigned_rate_limits": { 00:21:31.749 "rw_ios_per_sec": 0, 00:21:31.749 "rw_mbytes_per_sec": 0, 00:21:31.749 "r_mbytes_per_sec": 0, 00:21:31.749 "w_mbytes_per_sec": 0 00:21:31.749 }, 00:21:31.749 "claimed": true, 00:21:31.749 "claim_type": "exclusive_write", 00:21:31.749 "zoned": false, 00:21:31.749 "supported_io_types": { 00:21:31.749 "read": true, 00:21:31.749 "write": true, 00:21:31.749 "unmap": true, 00:21:31.749 "write_zeroes": true, 00:21:31.749 "flush": true, 00:21:31.749 "reset": true, 00:21:31.749 "compare": false, 00:21:31.749 "compare_and_write": false, 00:21:31.749 "abort": true, 00:21:31.749 "nvme_admin": false, 00:21:31.749 "nvme_io": false 00:21:31.749 }, 00:21:31.749 "memory_domains": [ 00:21:31.749 { 00:21:31.749 "dma_device_id": "system", 00:21:31.749 "dma_device_type": 1 00:21:31.749 }, 00:21:31.749 { 00:21:31.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.749 "dma_device_type": 2 00:21:31.749 } 00:21:31.749 ], 00:21:31.749 "driver_specific": {} 00:21:31.749 }' 00:21:31.749 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:31.749 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:31.749 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:31.749 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:31.749 23:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:31.749 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:31.749 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:32.009 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:32.009 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:32.009 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:32.009 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:32.009 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:32.009 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:32.009 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:32.009 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:32.268 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:32.268 "name": "BaseBdev3", 00:21:32.268 "aliases": [ 00:21:32.268 "de05c89a-cb28-4f15-8704-d95808a95248" 
00:21:32.268 ], 00:21:32.268 "product_name": "Malloc disk", 00:21:32.268 "block_size": 512, 00:21:32.268 "num_blocks": 65536, 00:21:32.268 "uuid": "de05c89a-cb28-4f15-8704-d95808a95248", 00:21:32.268 "assigned_rate_limits": { 00:21:32.268 "rw_ios_per_sec": 0, 00:21:32.268 "rw_mbytes_per_sec": 0, 00:21:32.268 "r_mbytes_per_sec": 0, 00:21:32.268 "w_mbytes_per_sec": 0 00:21:32.268 }, 00:21:32.268 "claimed": true, 00:21:32.268 "claim_type": "exclusive_write", 00:21:32.268 "zoned": false, 00:21:32.268 "supported_io_types": { 00:21:32.268 "read": true, 00:21:32.268 "write": true, 00:21:32.268 "unmap": true, 00:21:32.268 "write_zeroes": true, 00:21:32.268 "flush": true, 00:21:32.268 "reset": true, 00:21:32.268 "compare": false, 00:21:32.268 "compare_and_write": false, 00:21:32.268 "abort": true, 00:21:32.268 "nvme_admin": false, 00:21:32.268 "nvme_io": false 00:21:32.268 }, 00:21:32.268 "memory_domains": [ 00:21:32.268 { 00:21:32.268 "dma_device_id": "system", 00:21:32.268 "dma_device_type": 1 00:21:32.268 }, 00:21:32.268 { 00:21:32.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.268 "dma_device_type": 2 00:21:32.268 } 00:21:32.268 ], 00:21:32.268 "driver_specific": {} 00:21:32.268 }' 00:21:32.268 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:32.268 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:32.527 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:32.527 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:32.527 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:32.527 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:32.527 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:32.527 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:32.527 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:32.527 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:32.786 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:32.786 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:32.786 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:32.786 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:32.786 23:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:33.046 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:33.046 "name": "BaseBdev4", 00:21:33.046 "aliases": [ 00:21:33.046 "699ef7cf-1ea4-4eb0-8f36-b65e02a68993" 00:21:33.046 ], 00:21:33.046 "product_name": "Malloc disk", 00:21:33.046 "block_size": 512, 00:21:33.046 "num_blocks": 65536, 00:21:33.046 "uuid": "699ef7cf-1ea4-4eb0-8f36-b65e02a68993", 00:21:33.046 "assigned_rate_limits": { 00:21:33.046 "rw_ios_per_sec": 0, 00:21:33.046 "rw_mbytes_per_sec": 0, 00:21:33.046 "r_mbytes_per_sec": 0, 00:21:33.046 "w_mbytes_per_sec": 0 00:21:33.046 }, 00:21:33.046 "claimed": true, 00:21:33.046 "claim_type": 
"exclusive_write", 00:21:33.046 "zoned": false, 00:21:33.046 "supported_io_types": { 00:21:33.046 "read": true, 00:21:33.046 "write": true, 00:21:33.046 "unmap": true, 00:21:33.046 "write_zeroes": true, 00:21:33.046 "flush": true, 00:21:33.046 "reset": true, 00:21:33.046 "compare": false, 00:21:33.046 "compare_and_write": false, 00:21:33.046 "abort": true, 00:21:33.046 "nvme_admin": false, 00:21:33.046 "nvme_io": false 00:21:33.046 }, 00:21:33.046 "memory_domains": [ 00:21:33.046 { 00:21:33.046 "dma_device_id": "system", 00:21:33.046 "dma_device_type": 1 00:21:33.046 }, 00:21:33.046 { 00:21:33.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.046 "dma_device_type": 2 00:21:33.046 } 00:21:33.046 ], 00:21:33.046 "driver_specific": {} 00:21:33.046 }' 00:21:33.046 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:33.046 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:33.046 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:33.046 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:33.046 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:33.305 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:33.305 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:33.305 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:33.306 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:33.306 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:33.306 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:33.306 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:33.306 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:33.564 [2024-05-14 23:36:56.816770] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.823 23:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.083 23:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.083 "name": "Existed_Raid", 00:21:34.083 "uuid": "e2c15a97-fbd8-4788-8c0d-060bd0140a8a", 00:21:34.083 "strip_size_kb": 0, 00:21:34.083 "state": "online", 00:21:34.083 "raid_level": "raid1", 00:21:34.083 "superblock": true, 00:21:34.083 "num_base_bdevs": 4, 00:21:34.083 "num_base_bdevs_discovered": 3, 00:21:34.083 "num_base_bdevs_operational": 3, 00:21:34.083 "base_bdevs_list": [ 00:21:34.083 { 00:21:34.083 "name": null, 00:21:34.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.083 "is_configured": false, 00:21:34.083 "data_offset": 2048, 00:21:34.083 "data_size": 63488 00:21:34.083 }, 00:21:34.083 { 00:21:34.083 "name": "BaseBdev2", 00:21:34.083 "uuid": "39827138-8f7b-4cd8-ad39-5fb323a9e066", 00:21:34.083 "is_configured": true, 00:21:34.083 "data_offset": 2048, 00:21:34.083 "data_size": 63488 00:21:34.083 }, 00:21:34.083 { 00:21:34.083 "name": "BaseBdev3", 00:21:34.083 "uuid": "de05c89a-cb28-4f15-8704-d95808a95248", 00:21:34.083 "is_configured": true, 00:21:34.083 "data_offset": 2048, 00:21:34.083 "data_size": 63488 00:21:34.083 }, 00:21:34.083 { 00:21:34.083 "name": "BaseBdev4", 00:21:34.083 "uuid": "699ef7cf-1ea4-4eb0-8f36-b65e02a68993", 00:21:34.083 "is_configured": true, 00:21:34.083 "data_offset": 2048, 00:21:34.083 "data_size": 63488 00:21:34.083 } 00:21:34.083 ] 00:21:34.083 }' 00:21:34.083 23:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.083 23:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.651 23:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:34.651 23:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:34.651 23:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.651 23:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:34.925 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:34.925 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:34.925 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:34.925 [2024-05-14 23:36:58.191476] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:35.184 23:36:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:35.184 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:35.184 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.184 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:35.443 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:35.443 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:35.443 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:35.443 [2024-05-14 23:36:58.684458] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:35.702 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:35.702 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:35.702 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.702 23:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:35.960 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:35.960 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:35.960 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:35.960 [2024-05-14 23:36:59.215320] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:35.960 [2024-05-14 23:36:59.215399] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.219 [2024-05-14 23:36:59.299298] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.219 [2024-05-14 23:36:59.299405] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.219 [2024-05-14 23:36:59.299422] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:21:36.219 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:36.219 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:36.219 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.219 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:36.477 BaseBdev2 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:36.477 23:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:36.737 23:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:36.996 [ 00:21:36.996 { 00:21:36.996 "name": "BaseBdev2", 00:21:36.996 "aliases": [ 00:21:36.996 "4186c71c-247e-4370-a8f7-f9acc275a461" 00:21:36.996 ], 00:21:36.996 "product_name": "Malloc disk", 00:21:36.996 "block_size": 512, 00:21:36.996 "num_blocks": 65536, 00:21:36.996 "uuid": "4186c71c-247e-4370-a8f7-f9acc275a461", 00:21:36.996 "assigned_rate_limits": { 00:21:36.996 "rw_ios_per_sec": 0, 00:21:36.996 "rw_mbytes_per_sec": 0, 00:21:36.996 "r_mbytes_per_sec": 0, 00:21:36.996 "w_mbytes_per_sec": 0 00:21:36.996 }, 00:21:36.996 "claimed": false, 00:21:36.996 "zoned": false, 00:21:36.996 "supported_io_types": { 00:21:36.996 "read": true, 00:21:36.996 "write": true, 00:21:36.996 "unmap": true, 00:21:36.996 "write_zeroes": true, 00:21:36.996 "flush": true, 00:21:36.996 "reset": true, 00:21:36.996 "compare": false, 00:21:36.996 "compare_and_write": false, 00:21:36.996 "abort": true, 00:21:36.996 "nvme_admin": false, 00:21:36.996 "nvme_io": false 00:21:36.996 }, 00:21:36.996 "memory_domains": [ 00:21:36.996 { 00:21:36.996 "dma_device_id": "system", 00:21:36.996 "dma_device_type": 1 00:21:36.996 }, 00:21:36.996 { 00:21:36.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.996 "dma_device_type": 2 00:21:36.996 } 00:21:36.996 ], 00:21:36.996 "driver_specific": {} 00:21:36.996 } 00:21:36.996 ] 00:21:36.996 23:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:36.996 23:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:21:36.996 23:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:36.996 23:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:37.257 BaseBdev3 00:21:37.257 23:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:21:37.257 23:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 
00:21:37.257 23:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:37.257 23:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:37.257 23:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:37.257 23:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:37.257 23:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:37.516 23:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:37.777 [ 00:21:37.777 { 00:21:37.777 "name": "BaseBdev3", 00:21:37.777 "aliases": [ 00:21:37.777 "559c9cf2-465d-4cb8-a4ab-5750997af632" 00:21:37.777 ], 00:21:37.777 "product_name": "Malloc disk", 00:21:37.777 "block_size": 512, 00:21:37.777 "num_blocks": 65536, 00:21:37.777 "uuid": "559c9cf2-465d-4cb8-a4ab-5750997af632", 00:21:37.777 "assigned_rate_limits": { 00:21:37.777 "rw_ios_per_sec": 0, 00:21:37.777 "rw_mbytes_per_sec": 0, 00:21:37.777 "r_mbytes_per_sec": 0, 00:21:37.777 "w_mbytes_per_sec": 0 00:21:37.777 }, 00:21:37.777 "claimed": false, 00:21:37.777 "zoned": false, 00:21:37.777 "supported_io_types": { 00:21:37.777 "read": true, 00:21:37.777 "write": true, 00:21:37.777 "unmap": true, 00:21:37.777 "write_zeroes": true, 00:21:37.777 "flush": true, 00:21:37.777 "reset": true, 00:21:37.777 "compare": false, 00:21:37.777 "compare_and_write": false, 00:21:37.777 "abort": true, 00:21:37.777 "nvme_admin": false, 00:21:37.777 "nvme_io": false 00:21:37.777 }, 00:21:37.777 "memory_domains": [ 00:21:37.777 { 00:21:37.777 "dma_device_id": "system", 00:21:37.777 "dma_device_type": 1 00:21:37.777 }, 00:21:37.777 { 00:21:37.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.777 "dma_device_type": 2 00:21:37.777 } 00:21:37.777 ], 00:21:37.777 "driver_specific": {} 00:21:37.777 } 00:21:37.777 ] 00:21:37.777 23:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:37.777 23:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:21:37.777 23:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:37.777 23:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:38.131 BaseBdev4 00:21:38.131 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:21:38.131 23:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:21:38.131 23:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:38.131 23:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:38.131 23:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:38.131 23:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:38.131 23:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:38.131 23:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:38.390 [ 00:21:38.390 { 00:21:38.390 "name": "BaseBdev4", 00:21:38.390 "aliases": [ 00:21:38.390 "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1" 00:21:38.390 ], 00:21:38.390 "product_name": "Malloc disk", 00:21:38.390 "block_size": 512, 00:21:38.390 "num_blocks": 65536, 00:21:38.390 "uuid": "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1", 00:21:38.390 "assigned_rate_limits": { 00:21:38.390 "rw_ios_per_sec": 0, 00:21:38.390 "rw_mbytes_per_sec": 0, 00:21:38.390 "r_mbytes_per_sec": 0, 00:21:38.390 "w_mbytes_per_sec": 0 00:21:38.390 }, 00:21:38.390 "claimed": false, 00:21:38.390 "zoned": false, 00:21:38.390 "supported_io_types": { 00:21:38.390 "read": true, 00:21:38.390 "write": true, 00:21:38.390 "unmap": true, 00:21:38.390 "write_zeroes": true, 00:21:38.390 "flush": true, 00:21:38.390 "reset": true, 00:21:38.390 "compare": false, 00:21:38.390 "compare_and_write": false, 00:21:38.390 "abort": true, 00:21:38.390 "nvme_admin": false, 00:21:38.390 "nvme_io": false 00:21:38.390 }, 00:21:38.390 "memory_domains": [ 00:21:38.390 { 00:21:38.390 "dma_device_id": "system", 00:21:38.390 "dma_device_type": 1 00:21:38.390 }, 00:21:38.390 { 00:21:38.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.390 "dma_device_type": 2 00:21:38.390 } 00:21:38.390 ], 00:21:38.390 "driver_specific": {} 00:21:38.390 } 00:21:38.390 ] 00:21:38.390 23:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:38.390 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:21:38.390 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:38.390 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:38.652 [2024-05-14 23:37:01.762270] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:38.652 [2024-05-14 23:37:01.762343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:38.652 [2024-05-14 23:37:01.762370] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.652 [2024-05-14 23:37:01.763749] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:38.652 [2024-05-14 23:37:01.763799] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:38.652 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:38.652 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:38.652 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:38.652 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:38.652 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:38.652 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:38.652 23:37:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:38.652 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:38.652 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:38.652 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:38.652 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.653 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.911 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.911 "name": "Existed_Raid", 00:21:38.911 "uuid": "29d0ff45-1032-48f4-adc3-51900bb0416f", 00:21:38.911 "strip_size_kb": 0, 00:21:38.911 "state": "configuring", 00:21:38.911 "raid_level": "raid1", 00:21:38.911 "superblock": true, 00:21:38.911 "num_base_bdevs": 4, 00:21:38.911 "num_base_bdevs_discovered": 3, 00:21:38.911 "num_base_bdevs_operational": 4, 00:21:38.911 "base_bdevs_list": [ 00:21:38.911 { 00:21:38.911 "name": "BaseBdev1", 00:21:38.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.911 "is_configured": false, 00:21:38.911 "data_offset": 0, 00:21:38.911 "data_size": 0 00:21:38.911 }, 00:21:38.911 { 00:21:38.911 "name": "BaseBdev2", 00:21:38.911 "uuid": "4186c71c-247e-4370-a8f7-f9acc275a461", 00:21:38.911 "is_configured": true, 00:21:38.911 "data_offset": 2048, 00:21:38.911 "data_size": 63488 00:21:38.911 }, 00:21:38.911 { 00:21:38.911 "name": "BaseBdev3", 00:21:38.911 "uuid": "559c9cf2-465d-4cb8-a4ab-5750997af632", 00:21:38.911 "is_configured": true, 00:21:38.911 "data_offset": 2048, 00:21:38.911 "data_size": 63488 00:21:38.911 }, 00:21:38.911 { 00:21:38.911 "name": "BaseBdev4", 00:21:38.911 "uuid": "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1", 00:21:38.911 "is_configured": true, 00:21:38.911 "data_offset": 2048, 00:21:38.911 "data_size": 63488 00:21:38.911 } 00:21:38.911 ] 00:21:38.911 }' 00:21:38.911 23:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.911 23:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.477 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:39.737 [2024-05-14 23:37:02.830415] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.737 23:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.996 23:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:39.996 "name": "Existed_Raid", 00:21:39.996 "uuid": "29d0ff45-1032-48f4-adc3-51900bb0416f", 00:21:39.996 "strip_size_kb": 0, 00:21:39.996 "state": "configuring", 00:21:39.996 "raid_level": "raid1", 00:21:39.996 "superblock": true, 00:21:39.996 "num_base_bdevs": 4, 00:21:39.996 "num_base_bdevs_discovered": 2, 00:21:39.996 "num_base_bdevs_operational": 4, 00:21:39.996 "base_bdevs_list": [ 00:21:39.996 { 00:21:39.996 "name": "BaseBdev1", 00:21:39.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.996 "is_configured": false, 00:21:39.996 "data_offset": 0, 00:21:39.996 "data_size": 0 00:21:39.996 }, 00:21:39.996 { 00:21:39.996 "name": null, 00:21:39.996 "uuid": "4186c71c-247e-4370-a8f7-f9acc275a461", 00:21:39.996 "is_configured": false, 00:21:39.996 "data_offset": 2048, 00:21:39.996 "data_size": 63488 00:21:39.996 }, 00:21:39.996 { 00:21:39.996 "name": "BaseBdev3", 00:21:39.996 "uuid": "559c9cf2-465d-4cb8-a4ab-5750997af632", 00:21:39.996 "is_configured": true, 00:21:39.996 "data_offset": 2048, 00:21:39.996 "data_size": 63488 00:21:39.996 }, 00:21:39.996 { 00:21:39.996 "name": "BaseBdev4", 00:21:39.996 "uuid": "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1", 00:21:39.996 "is_configured": true, 00:21:39.996 "data_offset": 2048, 00:21:39.996 "data_size": 63488 00:21:39.996 } 00:21:39.996 ] 00:21:39.996 }' 00:21:39.996 23:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:39.996 23:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.564 23:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.564 23:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:40.822 23:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:21:40.822 23:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:41.081 [2024-05-14 23:37:04.165013] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:41.081 BaseBdev1 00:21:41.081 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:21:41.081 23:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:41.081 23:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:41.081 23:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:41.081 
23:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:41.081 23:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:41.081 23:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:41.340 [ 00:21:41.340 { 00:21:41.340 "name": "BaseBdev1", 00:21:41.340 "aliases": [ 00:21:41.340 "244915c0-a257-47ab-914e-20d6d6f92fd9" 00:21:41.340 ], 00:21:41.340 "product_name": "Malloc disk", 00:21:41.340 "block_size": 512, 00:21:41.340 "num_blocks": 65536, 00:21:41.340 "uuid": "244915c0-a257-47ab-914e-20d6d6f92fd9", 00:21:41.340 "assigned_rate_limits": { 00:21:41.340 "rw_ios_per_sec": 0, 00:21:41.340 "rw_mbytes_per_sec": 0, 00:21:41.340 "r_mbytes_per_sec": 0, 00:21:41.340 "w_mbytes_per_sec": 0 00:21:41.340 }, 00:21:41.340 "claimed": true, 00:21:41.340 "claim_type": "exclusive_write", 00:21:41.340 "zoned": false, 00:21:41.340 "supported_io_types": { 00:21:41.340 "read": true, 00:21:41.340 "write": true, 00:21:41.340 "unmap": true, 00:21:41.340 "write_zeroes": true, 00:21:41.340 "flush": true, 00:21:41.340 "reset": true, 00:21:41.340 "compare": false, 00:21:41.340 "compare_and_write": false, 00:21:41.340 "abort": true, 00:21:41.340 "nvme_admin": false, 00:21:41.340 "nvme_io": false 00:21:41.340 }, 00:21:41.340 "memory_domains": [ 00:21:41.340 { 00:21:41.340 "dma_device_id": "system", 00:21:41.340 "dma_device_type": 1 00:21:41.340 }, 00:21:41.340 { 00:21:41.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.340 "dma_device_type": 2 00:21:41.340 } 00:21:41.340 ], 00:21:41.340 "driver_specific": {} 00:21:41.340 } 00:21:41.340 ] 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.340 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.599 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:41.599 "name": "Existed_Raid", 00:21:41.599 "uuid": "29d0ff45-1032-48f4-adc3-51900bb0416f", 00:21:41.599 "strip_size_kb": 0, 00:21:41.599 "state": "configuring", 00:21:41.599 "raid_level": "raid1", 00:21:41.599 "superblock": true, 00:21:41.599 "num_base_bdevs": 4, 00:21:41.599 "num_base_bdevs_discovered": 3, 00:21:41.599 "num_base_bdevs_operational": 4, 00:21:41.599 "base_bdevs_list": [ 00:21:41.599 { 00:21:41.599 "name": "BaseBdev1", 00:21:41.599 "uuid": "244915c0-a257-47ab-914e-20d6d6f92fd9", 00:21:41.599 "is_configured": true, 00:21:41.599 "data_offset": 2048, 00:21:41.599 "data_size": 63488 00:21:41.599 }, 00:21:41.599 { 00:21:41.599 "name": null, 00:21:41.599 "uuid": "4186c71c-247e-4370-a8f7-f9acc275a461", 00:21:41.599 "is_configured": false, 00:21:41.599 "data_offset": 2048, 00:21:41.599 "data_size": 63488 00:21:41.599 }, 00:21:41.599 { 00:21:41.599 "name": "BaseBdev3", 00:21:41.599 "uuid": "559c9cf2-465d-4cb8-a4ab-5750997af632", 00:21:41.599 "is_configured": true, 00:21:41.599 "data_offset": 2048, 00:21:41.599 "data_size": 63488 00:21:41.599 }, 00:21:41.599 { 00:21:41.599 "name": "BaseBdev4", 00:21:41.599 "uuid": "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1", 00:21:41.599 "is_configured": true, 00:21:41.599 "data_offset": 2048, 00:21:41.599 "data_size": 63488 00:21:41.599 } 00:21:41.599 ] 00:21:41.599 }' 00:21:41.599 23:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:41.599 23:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.167 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.167 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:42.426 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:42.426 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:42.685 [2024-05-14 23:37:05.817552] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:42.685 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:42.685 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:42.685 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:42.685 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:42.685 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:42.685 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:42.685 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.685 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.685 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.685 23:37:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:42.685 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.685 23:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.944 23:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.944 "name": "Existed_Raid", 00:21:42.944 "uuid": "29d0ff45-1032-48f4-adc3-51900bb0416f", 00:21:42.944 "strip_size_kb": 0, 00:21:42.944 "state": "configuring", 00:21:42.944 "raid_level": "raid1", 00:21:42.944 "superblock": true, 00:21:42.944 "num_base_bdevs": 4, 00:21:42.944 "num_base_bdevs_discovered": 2, 00:21:42.944 "num_base_bdevs_operational": 4, 00:21:42.944 "base_bdevs_list": [ 00:21:42.944 { 00:21:42.944 "name": "BaseBdev1", 00:21:42.944 "uuid": "244915c0-a257-47ab-914e-20d6d6f92fd9", 00:21:42.944 "is_configured": true, 00:21:42.944 "data_offset": 2048, 00:21:42.944 "data_size": 63488 00:21:42.944 }, 00:21:42.944 { 00:21:42.944 "name": null, 00:21:42.944 "uuid": "4186c71c-247e-4370-a8f7-f9acc275a461", 00:21:42.944 "is_configured": false, 00:21:42.944 "data_offset": 2048, 00:21:42.944 "data_size": 63488 00:21:42.944 }, 00:21:42.944 { 00:21:42.944 "name": null, 00:21:42.944 "uuid": "559c9cf2-465d-4cb8-a4ab-5750997af632", 00:21:42.944 "is_configured": false, 00:21:42.944 "data_offset": 2048, 00:21:42.944 "data_size": 63488 00:21:42.944 }, 00:21:42.944 { 00:21:42.944 "name": "BaseBdev4", 00:21:42.944 "uuid": "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1", 00:21:42.944 "is_configured": true, 00:21:42.944 "data_offset": 2048, 00:21:42.944 "data_size": 63488 00:21:42.944 } 00:21:42.944 ] 00:21:42.944 }' 00:21:42.944 23:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.944 23:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.512 23:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.512 23:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:43.771 23:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:21:43.771 23:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:44.029 [2024-05-14 23:37:07.149784] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:44.029 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:44.029 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:44.029 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:44.029 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:44.030 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:44.030 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
00:21:44.030 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:44.030 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:44.030 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:44.030 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:44.030 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.030 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.289 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:44.289 "name": "Existed_Raid", 00:21:44.289 "uuid": "29d0ff45-1032-48f4-adc3-51900bb0416f", 00:21:44.289 "strip_size_kb": 0, 00:21:44.289 "state": "configuring", 00:21:44.289 "raid_level": "raid1", 00:21:44.289 "superblock": true, 00:21:44.289 "num_base_bdevs": 4, 00:21:44.289 "num_base_bdevs_discovered": 3, 00:21:44.289 "num_base_bdevs_operational": 4, 00:21:44.289 "base_bdevs_list": [ 00:21:44.289 { 00:21:44.289 "name": "BaseBdev1", 00:21:44.289 "uuid": "244915c0-a257-47ab-914e-20d6d6f92fd9", 00:21:44.289 "is_configured": true, 00:21:44.289 "data_offset": 2048, 00:21:44.289 "data_size": 63488 00:21:44.289 }, 00:21:44.289 { 00:21:44.289 "name": null, 00:21:44.289 "uuid": "4186c71c-247e-4370-a8f7-f9acc275a461", 00:21:44.289 "is_configured": false, 00:21:44.289 "data_offset": 2048, 00:21:44.289 "data_size": 63488 00:21:44.289 }, 00:21:44.289 { 00:21:44.289 "name": "BaseBdev3", 00:21:44.289 "uuid": "559c9cf2-465d-4cb8-a4ab-5750997af632", 00:21:44.289 "is_configured": true, 00:21:44.289 "data_offset": 2048, 00:21:44.289 "data_size": 63488 00:21:44.289 }, 00:21:44.289 { 00:21:44.289 "name": "BaseBdev4", 00:21:44.289 "uuid": "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1", 00:21:44.289 "is_configured": true, 00:21:44.289 "data_offset": 2048, 00:21:44.289 "data_size": 63488 00:21:44.289 } 00:21:44.289 ] 00:21:44.289 }' 00:21:44.289 23:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:44.289 23:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.857 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.857 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:45.115 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:21:45.115 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:45.374 [2024-05-14 23:37:08.434036] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:45.374 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:45.374 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:45.374 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:45.374 
23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.374 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:45.374 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:45.374 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.374 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.374 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.374 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.374 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.374 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.641 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.642 "name": "Existed_Raid", 00:21:45.642 "uuid": "29d0ff45-1032-48f4-adc3-51900bb0416f", 00:21:45.642 "strip_size_kb": 0, 00:21:45.642 "state": "configuring", 00:21:45.642 "raid_level": "raid1", 00:21:45.642 "superblock": true, 00:21:45.642 "num_base_bdevs": 4, 00:21:45.642 "num_base_bdevs_discovered": 2, 00:21:45.642 "num_base_bdevs_operational": 4, 00:21:45.642 "base_bdevs_list": [ 00:21:45.642 { 00:21:45.642 "name": null, 00:21:45.642 "uuid": "244915c0-a257-47ab-914e-20d6d6f92fd9", 00:21:45.642 "is_configured": false, 00:21:45.642 "data_offset": 2048, 00:21:45.642 "data_size": 63488 00:21:45.642 }, 00:21:45.642 { 00:21:45.642 "name": null, 00:21:45.642 "uuid": "4186c71c-247e-4370-a8f7-f9acc275a461", 00:21:45.642 "is_configured": false, 00:21:45.642 "data_offset": 2048, 00:21:45.642 "data_size": 63488 00:21:45.642 }, 00:21:45.642 { 00:21:45.642 "name": "BaseBdev3", 00:21:45.642 "uuid": "559c9cf2-465d-4cb8-a4ab-5750997af632", 00:21:45.642 "is_configured": true, 00:21:45.642 "data_offset": 2048, 00:21:45.642 "data_size": 63488 00:21:45.642 }, 00:21:45.642 { 00:21:45.642 "name": "BaseBdev4", 00:21:45.642 "uuid": "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1", 00:21:45.642 "is_configured": true, 00:21:45.642 "data_offset": 2048, 00:21:45.642 "data_size": 63488 00:21:45.642 } 00:21:45.642 ] 00:21:45.642 }' 00:21:45.642 23:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.642 23:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.220 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.220 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:46.479 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:21:46.479 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:46.739 [2024-05-14 23:37:09.796913] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:46.739 23:37:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:46.739 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:46.739 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:46.739 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:46.739 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:46.739 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:46.739 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:46.739 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:46.739 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:46.739 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:46.739 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.739 23:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.739 23:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:46.739 "name": "Existed_Raid", 00:21:46.739 "uuid": "29d0ff45-1032-48f4-adc3-51900bb0416f", 00:21:46.739 "strip_size_kb": 0, 00:21:46.739 "state": "configuring", 00:21:46.739 "raid_level": "raid1", 00:21:46.739 "superblock": true, 00:21:46.739 "num_base_bdevs": 4, 00:21:46.739 "num_base_bdevs_discovered": 3, 00:21:46.739 "num_base_bdevs_operational": 4, 00:21:46.739 "base_bdevs_list": [ 00:21:46.739 { 00:21:46.739 "name": null, 00:21:46.739 "uuid": "244915c0-a257-47ab-914e-20d6d6f92fd9", 00:21:46.739 "is_configured": false, 00:21:46.739 "data_offset": 2048, 00:21:46.739 "data_size": 63488 00:21:46.739 }, 00:21:46.739 { 00:21:46.739 "name": "BaseBdev2", 00:21:46.739 "uuid": "4186c71c-247e-4370-a8f7-f9acc275a461", 00:21:46.739 "is_configured": true, 00:21:46.739 "data_offset": 2048, 00:21:46.739 "data_size": 63488 00:21:46.739 }, 00:21:46.739 { 00:21:46.739 "name": "BaseBdev3", 00:21:46.739 "uuid": "559c9cf2-465d-4cb8-a4ab-5750997af632", 00:21:46.739 "is_configured": true, 00:21:46.739 "data_offset": 2048, 00:21:46.739 "data_size": 63488 00:21:46.739 }, 00:21:46.739 { 00:21:46.739 "name": "BaseBdev4", 00:21:46.739 "uuid": "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1", 00:21:46.739 "is_configured": true, 00:21:46.739 "data_offset": 2048, 00:21:46.739 "data_size": 63488 00:21:46.739 } 00:21:46.739 ] 00:21:46.739 }' 00:21:46.739 23:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:46.739 23:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.674 23:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.674 23:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:47.674 23:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == 
\t\r\u\e ]] 00:21:47.674 23:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.674 23:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:47.933 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 244915c0-a257-47ab-914e-20d6d6f92fd9 00:21:48.192 [2024-05-14 23:37:11.310670] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:48.192 [2024-05-14 23:37:11.310912] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:21:48.192 [2024-05-14 23:37:11.310931] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:48.192 [2024-05-14 23:37:11.311030] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:48.192 NewBaseBdev 00:21:48.192 [2024-05-14 23:37:11.311597] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:21:48.192 [2024-05-14 23:37:11.311621] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011f80 00:21:48.192 [2024-05-14 23:37:11.311732] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.192 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:21:48.192 23:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:21:48.192 23:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:48.192 23:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:48.192 23:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:48.192 23:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:48.192 23:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:48.451 23:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:48.710 [ 00:21:48.710 { 00:21:48.710 "name": "NewBaseBdev", 00:21:48.710 "aliases": [ 00:21:48.710 "244915c0-a257-47ab-914e-20d6d6f92fd9" 00:21:48.710 ], 00:21:48.710 "product_name": "Malloc disk", 00:21:48.710 "block_size": 512, 00:21:48.710 "num_blocks": 65536, 00:21:48.710 "uuid": "244915c0-a257-47ab-914e-20d6d6f92fd9", 00:21:48.710 "assigned_rate_limits": { 00:21:48.710 "rw_ios_per_sec": 0, 00:21:48.710 "rw_mbytes_per_sec": 0, 00:21:48.710 "r_mbytes_per_sec": 0, 00:21:48.710 "w_mbytes_per_sec": 0 00:21:48.710 }, 00:21:48.710 "claimed": true, 00:21:48.710 "claim_type": "exclusive_write", 00:21:48.710 "zoned": false, 00:21:48.710 "supported_io_types": { 00:21:48.710 "read": true, 00:21:48.710 "write": true, 00:21:48.710 "unmap": true, 00:21:48.710 "write_zeroes": true, 00:21:48.710 "flush": true, 00:21:48.710 "reset": true, 00:21:48.710 "compare": false, 00:21:48.710 "compare_and_write": false, 00:21:48.710 "abort": true, 00:21:48.710 "nvme_admin": 
false, 00:21:48.710 "nvme_io": false 00:21:48.710 }, 00:21:48.710 "memory_domains": [ 00:21:48.710 { 00:21:48.710 "dma_device_id": "system", 00:21:48.710 "dma_device_type": 1 00:21:48.710 }, 00:21:48.710 { 00:21:48.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.710 "dma_device_type": 2 00:21:48.710 } 00:21:48.710 ], 00:21:48.710 "driver_specific": {} 00:21:48.710 } 00:21:48.710 ] 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.710 23:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.969 23:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.969 "name": "Existed_Raid", 00:21:48.969 "uuid": "29d0ff45-1032-48f4-adc3-51900bb0416f", 00:21:48.969 "strip_size_kb": 0, 00:21:48.969 "state": "online", 00:21:48.969 "raid_level": "raid1", 00:21:48.969 "superblock": true, 00:21:48.969 "num_base_bdevs": 4, 00:21:48.969 "num_base_bdevs_discovered": 4, 00:21:48.969 "num_base_bdevs_operational": 4, 00:21:48.969 "base_bdevs_list": [ 00:21:48.969 { 00:21:48.969 "name": "NewBaseBdev", 00:21:48.969 "uuid": "244915c0-a257-47ab-914e-20d6d6f92fd9", 00:21:48.969 "is_configured": true, 00:21:48.969 "data_offset": 2048, 00:21:48.969 "data_size": 63488 00:21:48.969 }, 00:21:48.969 { 00:21:48.969 "name": "BaseBdev2", 00:21:48.969 "uuid": "4186c71c-247e-4370-a8f7-f9acc275a461", 00:21:48.969 "is_configured": true, 00:21:48.969 "data_offset": 2048, 00:21:48.969 "data_size": 63488 00:21:48.969 }, 00:21:48.969 { 00:21:48.969 "name": "BaseBdev3", 00:21:48.969 "uuid": "559c9cf2-465d-4cb8-a4ab-5750997af632", 00:21:48.969 "is_configured": true, 00:21:48.969 "data_offset": 2048, 00:21:48.969 "data_size": 63488 00:21:48.969 }, 00:21:48.969 { 00:21:48.969 "name": "BaseBdev4", 00:21:48.969 "uuid": "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1", 00:21:48.969 "is_configured": true, 00:21:48.969 "data_offset": 2048, 00:21:48.969 "data_size": 63488 00:21:48.969 } 00:21:48.969 ] 00:21:48.969 }' 00:21:48.969 23:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
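(Illustrative sketch, not part of the captured console output: the verify_raid_bdev_state step traced above reduces to querying the raid bdev over the test's private RPC socket and checking a handful of fields with jq. The rpc.py path, socket path, bdev name and expected values are taken from the surrounding log; the variable names and exact jq expressions are simplifications of what bdev_raid.sh does.)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# expected: an online raid1 volume with all four base bdevs discovered and operational
[[ $(jq -r .state <<< "$info") == online ]]
[[ $(jq -r .raid_level <<< "$info") == raid1 ]]
[[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 4 ]]
[[ $(jq -r .num_base_bdevs_operational <<< "$info") == 4 ]]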
00:21:48.969 23:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.537 23:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:21:49.537 23:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:21:49.537 23:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:49.537 23:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:49.537 23:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:49.537 23:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:21:49.537 23:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:49.537 23:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:49.796 [2024-05-14 23:37:13.027271] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.796 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:49.796 "name": "Existed_Raid", 00:21:49.796 "aliases": [ 00:21:49.796 "29d0ff45-1032-48f4-adc3-51900bb0416f" 00:21:49.796 ], 00:21:49.796 "product_name": "Raid Volume", 00:21:49.796 "block_size": 512, 00:21:49.796 "num_blocks": 63488, 00:21:49.796 "uuid": "29d0ff45-1032-48f4-adc3-51900bb0416f", 00:21:49.796 "assigned_rate_limits": { 00:21:49.796 "rw_ios_per_sec": 0, 00:21:49.796 "rw_mbytes_per_sec": 0, 00:21:49.796 "r_mbytes_per_sec": 0, 00:21:49.796 "w_mbytes_per_sec": 0 00:21:49.796 }, 00:21:49.796 "claimed": false, 00:21:49.796 "zoned": false, 00:21:49.796 "supported_io_types": { 00:21:49.796 "read": true, 00:21:49.796 "write": true, 00:21:49.796 "unmap": false, 00:21:49.796 "write_zeroes": true, 00:21:49.796 "flush": false, 00:21:49.796 "reset": true, 00:21:49.796 "compare": false, 00:21:49.796 "compare_and_write": false, 00:21:49.796 "abort": false, 00:21:49.796 "nvme_admin": false, 00:21:49.796 "nvme_io": false 00:21:49.796 }, 00:21:49.796 "memory_domains": [ 00:21:49.796 { 00:21:49.796 "dma_device_id": "system", 00:21:49.796 "dma_device_type": 1 00:21:49.796 }, 00:21:49.796 { 00:21:49.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.796 "dma_device_type": 2 00:21:49.796 }, 00:21:49.796 { 00:21:49.796 "dma_device_id": "system", 00:21:49.796 "dma_device_type": 1 00:21:49.796 }, 00:21:49.796 { 00:21:49.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.796 "dma_device_type": 2 00:21:49.796 }, 00:21:49.796 { 00:21:49.796 "dma_device_id": "system", 00:21:49.796 "dma_device_type": 1 00:21:49.796 }, 00:21:49.796 { 00:21:49.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.796 "dma_device_type": 2 00:21:49.796 }, 00:21:49.796 { 00:21:49.796 "dma_device_id": "system", 00:21:49.796 "dma_device_type": 1 00:21:49.796 }, 00:21:49.796 { 00:21:49.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.796 "dma_device_type": 2 00:21:49.796 } 00:21:49.796 ], 00:21:49.796 "driver_specific": { 00:21:49.796 "raid": { 00:21:49.796 "uuid": "29d0ff45-1032-48f4-adc3-51900bb0416f", 00:21:49.796 "strip_size_kb": 0, 00:21:49.796 "state": "online", 00:21:49.796 "raid_level": "raid1", 00:21:49.796 "superblock": true, 00:21:49.796 "num_base_bdevs": 4, 00:21:49.796 "num_base_bdevs_discovered": 4, 
00:21:49.796 "num_base_bdevs_operational": 4, 00:21:49.796 "base_bdevs_list": [ 00:21:49.796 { 00:21:49.796 "name": "NewBaseBdev", 00:21:49.796 "uuid": "244915c0-a257-47ab-914e-20d6d6f92fd9", 00:21:49.796 "is_configured": true, 00:21:49.796 "data_offset": 2048, 00:21:49.796 "data_size": 63488 00:21:49.796 }, 00:21:49.796 { 00:21:49.796 "name": "BaseBdev2", 00:21:49.796 "uuid": "4186c71c-247e-4370-a8f7-f9acc275a461", 00:21:49.796 "is_configured": true, 00:21:49.796 "data_offset": 2048, 00:21:49.796 "data_size": 63488 00:21:49.796 }, 00:21:49.796 { 00:21:49.796 "name": "BaseBdev3", 00:21:49.796 "uuid": "559c9cf2-465d-4cb8-a4ab-5750997af632", 00:21:49.796 "is_configured": true, 00:21:49.796 "data_offset": 2048, 00:21:49.796 "data_size": 63488 00:21:49.796 }, 00:21:49.796 { 00:21:49.796 "name": "BaseBdev4", 00:21:49.796 "uuid": "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1", 00:21:49.796 "is_configured": true, 00:21:49.796 "data_offset": 2048, 00:21:49.796 "data_size": 63488 00:21:49.796 } 00:21:49.796 ] 00:21:49.796 } 00:21:49.796 } 00:21:49.796 }' 00:21:49.796 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:50.055 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:21:50.055 BaseBdev2 00:21:50.055 BaseBdev3 00:21:50.055 BaseBdev4' 00:21:50.055 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:50.055 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:50.055 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:50.055 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:50.055 "name": "NewBaseBdev", 00:21:50.055 "aliases": [ 00:21:50.055 "244915c0-a257-47ab-914e-20d6d6f92fd9" 00:21:50.055 ], 00:21:50.055 "product_name": "Malloc disk", 00:21:50.055 "block_size": 512, 00:21:50.055 "num_blocks": 65536, 00:21:50.055 "uuid": "244915c0-a257-47ab-914e-20d6d6f92fd9", 00:21:50.055 "assigned_rate_limits": { 00:21:50.055 "rw_ios_per_sec": 0, 00:21:50.055 "rw_mbytes_per_sec": 0, 00:21:50.055 "r_mbytes_per_sec": 0, 00:21:50.055 "w_mbytes_per_sec": 0 00:21:50.055 }, 00:21:50.055 "claimed": true, 00:21:50.055 "claim_type": "exclusive_write", 00:21:50.055 "zoned": false, 00:21:50.055 "supported_io_types": { 00:21:50.055 "read": true, 00:21:50.055 "write": true, 00:21:50.055 "unmap": true, 00:21:50.055 "write_zeroes": true, 00:21:50.055 "flush": true, 00:21:50.055 "reset": true, 00:21:50.055 "compare": false, 00:21:50.055 "compare_and_write": false, 00:21:50.055 "abort": true, 00:21:50.055 "nvme_admin": false, 00:21:50.055 "nvme_io": false 00:21:50.055 }, 00:21:50.055 "memory_domains": [ 00:21:50.055 { 00:21:50.055 "dma_device_id": "system", 00:21:50.055 "dma_device_type": 1 00:21:50.055 }, 00:21:50.055 { 00:21:50.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.055 "dma_device_type": 2 00:21:50.055 } 00:21:50.055 ], 00:21:50.055 "driver_specific": {} 00:21:50.055 }' 00:21:50.055 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:50.314 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:50.314 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # 
[[ 512 == 512 ]] 00:21:50.314 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:50.314 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:50.314 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:50.314 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:50.574 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:50.574 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:50.574 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:50.574 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:50.574 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:50.574 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:50.574 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:50.574 23:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:50.832 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:50.832 "name": "BaseBdev2", 00:21:50.832 "aliases": [ 00:21:50.832 "4186c71c-247e-4370-a8f7-f9acc275a461" 00:21:50.832 ], 00:21:50.832 "product_name": "Malloc disk", 00:21:50.832 "block_size": 512, 00:21:50.832 "num_blocks": 65536, 00:21:50.832 "uuid": "4186c71c-247e-4370-a8f7-f9acc275a461", 00:21:50.832 "assigned_rate_limits": { 00:21:50.832 "rw_ios_per_sec": 0, 00:21:50.832 "rw_mbytes_per_sec": 0, 00:21:50.832 "r_mbytes_per_sec": 0, 00:21:50.832 "w_mbytes_per_sec": 0 00:21:50.832 }, 00:21:50.832 "claimed": true, 00:21:50.832 "claim_type": "exclusive_write", 00:21:50.832 "zoned": false, 00:21:50.832 "supported_io_types": { 00:21:50.832 "read": true, 00:21:50.832 "write": true, 00:21:50.832 "unmap": true, 00:21:50.832 "write_zeroes": true, 00:21:50.832 "flush": true, 00:21:50.832 "reset": true, 00:21:50.832 "compare": false, 00:21:50.832 "compare_and_write": false, 00:21:50.832 "abort": true, 00:21:50.832 "nvme_admin": false, 00:21:50.832 "nvme_io": false 00:21:50.832 }, 00:21:50.832 "memory_domains": [ 00:21:50.832 { 00:21:50.832 "dma_device_id": "system", 00:21:50.832 "dma_device_type": 1 00:21:50.832 }, 00:21:50.832 { 00:21:50.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.832 "dma_device_type": 2 00:21:50.832 } 00:21:50.832 ], 00:21:50.832 "driver_specific": {} 00:21:50.832 }' 00:21:50.832 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:50.832 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:51.090 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:51.090 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.090 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.090 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.090 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.090 
23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.348 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.348 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:51.348 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:51.348 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:51.348 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:51.348 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:51.348 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:51.606 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:51.606 "name": "BaseBdev3", 00:21:51.606 "aliases": [ 00:21:51.606 "559c9cf2-465d-4cb8-a4ab-5750997af632" 00:21:51.606 ], 00:21:51.606 "product_name": "Malloc disk", 00:21:51.606 "block_size": 512, 00:21:51.606 "num_blocks": 65536, 00:21:51.606 "uuid": "559c9cf2-465d-4cb8-a4ab-5750997af632", 00:21:51.606 "assigned_rate_limits": { 00:21:51.606 "rw_ios_per_sec": 0, 00:21:51.606 "rw_mbytes_per_sec": 0, 00:21:51.606 "r_mbytes_per_sec": 0, 00:21:51.606 "w_mbytes_per_sec": 0 00:21:51.606 }, 00:21:51.606 "claimed": true, 00:21:51.606 "claim_type": "exclusive_write", 00:21:51.606 "zoned": false, 00:21:51.606 "supported_io_types": { 00:21:51.606 "read": true, 00:21:51.606 "write": true, 00:21:51.606 "unmap": true, 00:21:51.606 "write_zeroes": true, 00:21:51.606 "flush": true, 00:21:51.606 "reset": true, 00:21:51.606 "compare": false, 00:21:51.606 "compare_and_write": false, 00:21:51.606 "abort": true, 00:21:51.606 "nvme_admin": false, 00:21:51.606 "nvme_io": false 00:21:51.606 }, 00:21:51.606 "memory_domains": [ 00:21:51.606 { 00:21:51.606 "dma_device_id": "system", 00:21:51.606 "dma_device_type": 1 00:21:51.606 }, 00:21:51.606 { 00:21:51.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.606 "dma_device_type": 2 00:21:51.606 } 00:21:51.606 ], 00:21:51.606 "driver_specific": {} 00:21:51.606 }' 00:21:51.606 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:51.606 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:51.606 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:51.606 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.866 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.866 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.866 23:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.866 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.866 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.866 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:51.866 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:52.124 23:37:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:52.124 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:52.124 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:52.124 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:52.383 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:52.383 "name": "BaseBdev4", 00:21:52.383 "aliases": [ 00:21:52.383 "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1" 00:21:52.383 ], 00:21:52.383 "product_name": "Malloc disk", 00:21:52.383 "block_size": 512, 00:21:52.383 "num_blocks": 65536, 00:21:52.383 "uuid": "a1a235ac-951f-4f84-a5f2-f0c3e20f7eb1", 00:21:52.383 "assigned_rate_limits": { 00:21:52.383 "rw_ios_per_sec": 0, 00:21:52.383 "rw_mbytes_per_sec": 0, 00:21:52.383 "r_mbytes_per_sec": 0, 00:21:52.383 "w_mbytes_per_sec": 0 00:21:52.383 }, 00:21:52.383 "claimed": true, 00:21:52.383 "claim_type": "exclusive_write", 00:21:52.383 "zoned": false, 00:21:52.383 "supported_io_types": { 00:21:52.383 "read": true, 00:21:52.383 "write": true, 00:21:52.383 "unmap": true, 00:21:52.383 "write_zeroes": true, 00:21:52.383 "flush": true, 00:21:52.383 "reset": true, 00:21:52.383 "compare": false, 00:21:52.383 "compare_and_write": false, 00:21:52.383 "abort": true, 00:21:52.383 "nvme_admin": false, 00:21:52.383 "nvme_io": false 00:21:52.383 }, 00:21:52.383 "memory_domains": [ 00:21:52.383 { 00:21:52.383 "dma_device_id": "system", 00:21:52.383 "dma_device_type": 1 00:21:52.383 }, 00:21:52.383 { 00:21:52.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.383 "dma_device_type": 2 00:21:52.383 } 00:21:52.383 ], 00:21:52.383 "driver_specific": {} 00:21:52.383 }' 00:21:52.383 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:52.383 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:52.383 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:52.383 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:52.383 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:52.383 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:52.383 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:52.641 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:52.641 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:52.641 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:52.641 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:52.641 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:52.641 23:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:52.903 [2024-05-14 23:37:16.007543] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:52.903 [2024-05-14 23:37:16.007587] 
bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.903 [2024-05-14 23:37:16.007662] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.903 [2024-05-14 23:37:16.007880] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.903 [2024-05-14 23:37:16.007905] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name Existed_Raid, state offline 00:21:52.903 23:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 70886 00:21:52.903 23:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 70886 ']' 00:21:52.903 23:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 70886 00:21:52.903 23:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:21:52.903 23:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:52.903 23:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70886 00:21:52.903 killing process with pid 70886 00:21:52.903 23:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:52.903 23:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:52.903 23:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70886' 00:21:52.903 23:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 70886 00:21:52.903 23:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 70886 00:21:52.903 [2024-05-14 23:37:16.039836] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:53.161 [2024-05-14 23:37:16.362300] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:54.632 ************************************ 00:21:54.632 END TEST raid_state_function_test_sb 00:21:54.632 ************************************ 00:21:54.632 23:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:21:54.632 00:21:54.632 real 0m34.042s 00:21:54.632 user 1m3.936s 00:21:54.632 sys 0m3.392s 00:21:54.632 23:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:54.632 23:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.632 23:37:17 bdev_raid -- bdev/bdev_raid.sh@817 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:21:54.632 23:37:17 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:21:54.632 23:37:17 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:54.632 23:37:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:54.632 ************************************ 00:21:54.632 START TEST raid_superblock_test 00:21:54.632 ************************************ 00:21:54.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
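(Illustrative sketch, not captured output: the teardown that closes raid_state_function_test_sb above is a raid bdev delete over the test's RPC socket followed by stopping the app that served it; the killprocess helper in autotest_common.sh adds the kill -0 / ps / wait checks visible in the log. The raid_pid variable name is an assumption; 70886 is the pid from this run.)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_raid_delete Existed_Raid   # remove the raid bdev before shutting the app down
kill "$raid_pid"                                  # raid_pid held 70886 in this run
wait "$raid_pid"                                  # reap the process so the test exits cleanly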
00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 4 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71999 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71999 /var/tmp/spdk-raid.sock 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 71999 ']' 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.632 23:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:54.632 [2024-05-14 23:37:17.844484] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
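(Illustrative sketch of the setup step logged above, not captured output: raid_superblock_test starts a standalone bdev_svc app on a private RPC socket with bdev_raid debug logging and waits until that socket answers. The real waitforlisten helper lives in test/common/autotest_common.sh; the polling loop and retry count below are simplifications.)
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
for _ in $(seq 1 100); do
    # the app is ready once it answers a trivial RPC on its UNIX-domain socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done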
00:21:54.632 [2024-05-14 23:37:17.844669] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71999 ] 00:21:54.891 [2024-05-14 23:37:18.013505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.150 [2024-05-14 23:37:18.271309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.408 [2024-05-14 23:37:18.475108] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:55.668 23:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:55.668 23:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:21:55.668 23:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:55.668 23:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:55.668 23:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:55.668 23:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:55.668 23:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:55.668 23:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:55.668 23:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:55.668 23:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:55.668 23:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:55.927 malloc1 00:21:55.927 23:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:56.187 [2024-05-14 23:37:19.229031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:56.187 [2024-05-14 23:37:19.229130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.187 [2024-05-14 23:37:19.229391] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:21:56.187 [2024-05-14 23:37:19.229449] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.187 pt1 00:21:56.187 [2024-05-14 23:37:19.231199] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.187 [2024-05-14 23:37:19.231236] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:56.187 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:56.187 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:56.187 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:56.187 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:56.187 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:56.187 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:21:56.187 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:56.187 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:56.187 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:56.446 malloc2 00:21:56.446 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:56.706 [2024-05-14 23:37:19.756441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:56.706 [2024-05-14 23:37:19.756550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.706 [2024-05-14 23:37:19.756599] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:21:56.706 [2024-05-14 23:37:19.756639] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.706 [2024-05-14 23:37:19.758612] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.706 [2024-05-14 23:37:19.758657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:56.706 pt2 00:21:56.706 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:56.706 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:56.706 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:56.706 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:56.706 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:56.706 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:56.706 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:56.706 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:56.706 23:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:56.965 malloc3 00:21:56.965 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:57.224 [2024-05-14 23:37:20.252365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:57.224 [2024-05-14 23:37:20.252471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.224 [2024-05-14 23:37:20.252522] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002af80 00:21:57.224 [2024-05-14 23:37:20.252583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.224 [2024-05-14 23:37:20.254638] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.224 [2024-05-14 23:37:20.254686] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:57.224 pt3 00:21:57.224 23:37:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:57.224 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:57.224 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:21:57.224 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:21:57.224 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:57.224 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:57.224 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:57.224 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:57.224 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:57.483 malloc4 00:21:57.483 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:57.483 [2024-05-14 23:37:20.702917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:57.483 [2024-05-14 23:37:20.703028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.483 [2024-05-14 23:37:20.703078] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:21:57.483 [2024-05-14 23:37:20.703143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.483 [2024-05-14 23:37:20.705130] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.483 [2024-05-14 23:37:20.705185] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:57.483 pt4 00:21:57.483 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:57.483 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:57.483 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:57.742 [2024-05-14 23:37:20.895020] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:57.742 [2024-05-14 23:37:20.896720] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:57.742 [2024-05-14 23:37:20.896779] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:57.742 [2024-05-14 23:37:20.896814] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:57.742 [2024-05-14 23:37:20.896945] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:21:57.742 [2024-05-14 23:37:20.896959] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:57.742 [2024-05-14 23:37:20.897090] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:57.742 [2024-05-14 23:37:20.897348] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:21:57.742 [2024-05-14 23:37:20.897363] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:21:57.742 [2024-05-14 23:37:20.897481] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.742 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:57.742 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:57.742 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:57.742 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:57.742 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:57.742 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:57.742 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:57.742 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:57.742 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:57.742 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:57.743 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.743 23:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.000 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.000 "name": "raid_bdev1", 00:21:58.000 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:21:58.000 "strip_size_kb": 0, 00:21:58.000 "state": "online", 00:21:58.000 "raid_level": "raid1", 00:21:58.000 "superblock": true, 00:21:58.000 "num_base_bdevs": 4, 00:21:58.000 "num_base_bdevs_discovered": 4, 00:21:58.000 "num_base_bdevs_operational": 4, 00:21:58.000 "base_bdevs_list": [ 00:21:58.000 { 00:21:58.001 "name": "pt1", 00:21:58.001 "uuid": "1f67e184-2aae-5c2a-9c6b-1ee784cbd7ff", 00:21:58.001 "is_configured": true, 00:21:58.001 "data_offset": 2048, 00:21:58.001 "data_size": 63488 00:21:58.001 }, 00:21:58.001 { 00:21:58.001 "name": "pt2", 00:21:58.001 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:21:58.001 "is_configured": true, 00:21:58.001 "data_offset": 2048, 00:21:58.001 "data_size": 63488 00:21:58.001 }, 00:21:58.001 { 00:21:58.001 "name": "pt3", 00:21:58.001 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:21:58.001 "is_configured": true, 00:21:58.001 "data_offset": 2048, 00:21:58.001 "data_size": 63488 00:21:58.001 }, 00:21:58.001 { 00:21:58.001 "name": "pt4", 00:21:58.001 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:21:58.001 "is_configured": true, 00:21:58.001 "data_offset": 2048, 00:21:58.001 "data_size": 63488 00:21:58.001 } 00:21:58.001 ] 00:21:58.001 }' 00:21:58.001 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.001 23:37:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.632 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:58.632 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:21:58.632 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:58.632 23:37:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:58.632 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:58.632 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:21:58.632 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:58.632 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:58.892 [2024-05-14 23:37:21.931296] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:58.892 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:58.892 "name": "raid_bdev1", 00:21:58.892 "aliases": [ 00:21:58.892 "2580db67-7398-4a86-8377-32ba9259d5a7" 00:21:58.892 ], 00:21:58.892 "product_name": "Raid Volume", 00:21:58.892 "block_size": 512, 00:21:58.892 "num_blocks": 63488, 00:21:58.892 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:21:58.892 "assigned_rate_limits": { 00:21:58.892 "rw_ios_per_sec": 0, 00:21:58.892 "rw_mbytes_per_sec": 0, 00:21:58.892 "r_mbytes_per_sec": 0, 00:21:58.892 "w_mbytes_per_sec": 0 00:21:58.892 }, 00:21:58.892 "claimed": false, 00:21:58.892 "zoned": false, 00:21:58.892 "supported_io_types": { 00:21:58.892 "read": true, 00:21:58.892 "write": true, 00:21:58.892 "unmap": false, 00:21:58.892 "write_zeroes": true, 00:21:58.892 "flush": false, 00:21:58.892 "reset": true, 00:21:58.892 "compare": false, 00:21:58.892 "compare_and_write": false, 00:21:58.892 "abort": false, 00:21:58.892 "nvme_admin": false, 00:21:58.892 "nvme_io": false 00:21:58.892 }, 00:21:58.892 "memory_domains": [ 00:21:58.892 { 00:21:58.892 "dma_device_id": "system", 00:21:58.892 "dma_device_type": 1 00:21:58.892 }, 00:21:58.892 { 00:21:58.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.892 "dma_device_type": 2 00:21:58.892 }, 00:21:58.892 { 00:21:58.892 "dma_device_id": "system", 00:21:58.892 "dma_device_type": 1 00:21:58.892 }, 00:21:58.892 { 00:21:58.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.892 "dma_device_type": 2 00:21:58.892 }, 00:21:58.892 { 00:21:58.892 "dma_device_id": "system", 00:21:58.892 "dma_device_type": 1 00:21:58.892 }, 00:21:58.892 { 00:21:58.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.892 "dma_device_type": 2 00:21:58.892 }, 00:21:58.892 { 00:21:58.892 "dma_device_id": "system", 00:21:58.892 "dma_device_type": 1 00:21:58.892 }, 00:21:58.892 { 00:21:58.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.892 "dma_device_type": 2 00:21:58.892 } 00:21:58.892 ], 00:21:58.892 "driver_specific": { 00:21:58.892 "raid": { 00:21:58.892 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:21:58.892 "strip_size_kb": 0, 00:21:58.892 "state": "online", 00:21:58.892 "raid_level": "raid1", 00:21:58.892 "superblock": true, 00:21:58.892 "num_base_bdevs": 4, 00:21:58.892 "num_base_bdevs_discovered": 4, 00:21:58.892 "num_base_bdevs_operational": 4, 00:21:58.892 "base_bdevs_list": [ 00:21:58.892 { 00:21:58.892 "name": "pt1", 00:21:58.892 "uuid": "1f67e184-2aae-5c2a-9c6b-1ee784cbd7ff", 00:21:58.892 "is_configured": true, 00:21:58.892 "data_offset": 2048, 00:21:58.892 "data_size": 63488 00:21:58.892 }, 00:21:58.892 { 00:21:58.892 "name": "pt2", 00:21:58.892 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:21:58.892 "is_configured": true, 00:21:58.892 "data_offset": 2048, 00:21:58.892 "data_size": 63488 00:21:58.892 }, 00:21:58.892 { 
00:21:58.892 "name": "pt3", 00:21:58.892 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:21:58.892 "is_configured": true, 00:21:58.892 "data_offset": 2048, 00:21:58.892 "data_size": 63488 00:21:58.892 }, 00:21:58.892 { 00:21:58.892 "name": "pt4", 00:21:58.892 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:21:58.892 "is_configured": true, 00:21:58.892 "data_offset": 2048, 00:21:58.892 "data_size": 63488 00:21:58.892 } 00:21:58.892 ] 00:21:58.892 } 00:21:58.892 } 00:21:58.892 }' 00:21:58.892 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:58.892 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:21:58.892 pt2 00:21:58.892 pt3 00:21:58.892 pt4' 00:21:58.892 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:58.892 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:58.892 23:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:59.151 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:59.151 "name": "pt1", 00:21:59.151 "aliases": [ 00:21:59.151 "1f67e184-2aae-5c2a-9c6b-1ee784cbd7ff" 00:21:59.151 ], 00:21:59.151 "product_name": "passthru", 00:21:59.151 "block_size": 512, 00:21:59.151 "num_blocks": 65536, 00:21:59.151 "uuid": "1f67e184-2aae-5c2a-9c6b-1ee784cbd7ff", 00:21:59.151 "assigned_rate_limits": { 00:21:59.151 "rw_ios_per_sec": 0, 00:21:59.151 "rw_mbytes_per_sec": 0, 00:21:59.151 "r_mbytes_per_sec": 0, 00:21:59.151 "w_mbytes_per_sec": 0 00:21:59.151 }, 00:21:59.151 "claimed": true, 00:21:59.151 "claim_type": "exclusive_write", 00:21:59.151 "zoned": false, 00:21:59.151 "supported_io_types": { 00:21:59.151 "read": true, 00:21:59.151 "write": true, 00:21:59.151 "unmap": true, 00:21:59.151 "write_zeroes": true, 00:21:59.151 "flush": true, 00:21:59.151 "reset": true, 00:21:59.151 "compare": false, 00:21:59.151 "compare_and_write": false, 00:21:59.151 "abort": true, 00:21:59.151 "nvme_admin": false, 00:21:59.151 "nvme_io": false 00:21:59.151 }, 00:21:59.151 "memory_domains": [ 00:21:59.151 { 00:21:59.151 "dma_device_id": "system", 00:21:59.151 "dma_device_type": 1 00:21:59.151 }, 00:21:59.151 { 00:21:59.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.151 "dma_device_type": 2 00:21:59.151 } 00:21:59.151 ], 00:21:59.151 "driver_specific": { 00:21:59.152 "passthru": { 00:21:59.152 "name": "pt1", 00:21:59.152 "base_bdev_name": "malloc1" 00:21:59.152 } 00:21:59.152 } 00:21:59.152 }' 00:21:59.152 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:59.152 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:59.152 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:59.152 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:59.152 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:59.410 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:59.410 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:59.410 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:59.410 23:37:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:59.410 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:59.410 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:59.410 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:59.410 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:59.410 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:59.410 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:59.669 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:59.669 "name": "pt2", 00:21:59.669 "aliases": [ 00:21:59.669 "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935" 00:21:59.669 ], 00:21:59.669 "product_name": "passthru", 00:21:59.669 "block_size": 512, 00:21:59.669 "num_blocks": 65536, 00:21:59.669 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:21:59.669 "assigned_rate_limits": { 00:21:59.669 "rw_ios_per_sec": 0, 00:21:59.669 "rw_mbytes_per_sec": 0, 00:21:59.669 "r_mbytes_per_sec": 0, 00:21:59.669 "w_mbytes_per_sec": 0 00:21:59.669 }, 00:21:59.669 "claimed": true, 00:21:59.669 "claim_type": "exclusive_write", 00:21:59.669 "zoned": false, 00:21:59.669 "supported_io_types": { 00:21:59.669 "read": true, 00:21:59.669 "write": true, 00:21:59.669 "unmap": true, 00:21:59.669 "write_zeroes": true, 00:21:59.669 "flush": true, 00:21:59.669 "reset": true, 00:21:59.669 "compare": false, 00:21:59.669 "compare_and_write": false, 00:21:59.669 "abort": true, 00:21:59.669 "nvme_admin": false, 00:21:59.669 "nvme_io": false 00:21:59.669 }, 00:21:59.669 "memory_domains": [ 00:21:59.669 { 00:21:59.669 "dma_device_id": "system", 00:21:59.669 "dma_device_type": 1 00:21:59.669 }, 00:21:59.669 { 00:21:59.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.669 "dma_device_type": 2 00:21:59.669 } 00:21:59.669 ], 00:21:59.669 "driver_specific": { 00:21:59.669 "passthru": { 00:21:59.669 "name": "pt2", 00:21:59.669 "base_bdev_name": "malloc2" 00:21:59.669 } 00:21:59.669 } 00:21:59.669 }' 00:21:59.669 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:59.927 23:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:59.927 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:59.927 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:59.927 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:59.927 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:59.927 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:59.927 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:00.186 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:00.186 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:00.186 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:00.186 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:00.186 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- 
# for name in $base_bdev_names 00:22:00.186 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:00.186 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:00.445 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:00.445 "name": "pt3", 00:22:00.445 "aliases": [ 00:22:00.445 "c627baf6-7e5c-5df1-a2d3-feff114c0198" 00:22:00.445 ], 00:22:00.445 "product_name": "passthru", 00:22:00.445 "block_size": 512, 00:22:00.445 "num_blocks": 65536, 00:22:00.445 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:00.445 "assigned_rate_limits": { 00:22:00.445 "rw_ios_per_sec": 0, 00:22:00.445 "rw_mbytes_per_sec": 0, 00:22:00.445 "r_mbytes_per_sec": 0, 00:22:00.445 "w_mbytes_per_sec": 0 00:22:00.445 }, 00:22:00.445 "claimed": true, 00:22:00.445 "claim_type": "exclusive_write", 00:22:00.445 "zoned": false, 00:22:00.445 "supported_io_types": { 00:22:00.445 "read": true, 00:22:00.445 "write": true, 00:22:00.445 "unmap": true, 00:22:00.445 "write_zeroes": true, 00:22:00.445 "flush": true, 00:22:00.445 "reset": true, 00:22:00.445 "compare": false, 00:22:00.445 "compare_and_write": false, 00:22:00.445 "abort": true, 00:22:00.445 "nvme_admin": false, 00:22:00.445 "nvme_io": false 00:22:00.445 }, 00:22:00.445 "memory_domains": [ 00:22:00.445 { 00:22:00.445 "dma_device_id": "system", 00:22:00.445 "dma_device_type": 1 00:22:00.445 }, 00:22:00.445 { 00:22:00.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.445 "dma_device_type": 2 00:22:00.445 } 00:22:00.445 ], 00:22:00.445 "driver_specific": { 00:22:00.445 "passthru": { 00:22:00.445 "name": "pt3", 00:22:00.445 "base_bdev_name": "malloc3" 00:22:00.445 } 00:22:00.445 } 00:22:00.445 }' 00:22:00.445 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:00.445 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:00.704 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:00.704 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:00.704 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:00.704 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:00.704 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:00.704 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:00.704 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:00.704 23:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:00.963 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:00.963 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:00.963 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:00.963 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:00.963 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:01.234 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:01.235 "name": "pt4", 00:22:01.235 "aliases": [ 
00:22:01.235 "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a" 00:22:01.235 ], 00:22:01.235 "product_name": "passthru", 00:22:01.235 "block_size": 512, 00:22:01.235 "num_blocks": 65536, 00:22:01.235 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:01.235 "assigned_rate_limits": { 00:22:01.235 "rw_ios_per_sec": 0, 00:22:01.235 "rw_mbytes_per_sec": 0, 00:22:01.235 "r_mbytes_per_sec": 0, 00:22:01.235 "w_mbytes_per_sec": 0 00:22:01.235 }, 00:22:01.235 "claimed": true, 00:22:01.235 "claim_type": "exclusive_write", 00:22:01.235 "zoned": false, 00:22:01.235 "supported_io_types": { 00:22:01.235 "read": true, 00:22:01.235 "write": true, 00:22:01.235 "unmap": true, 00:22:01.235 "write_zeroes": true, 00:22:01.235 "flush": true, 00:22:01.235 "reset": true, 00:22:01.235 "compare": false, 00:22:01.235 "compare_and_write": false, 00:22:01.235 "abort": true, 00:22:01.235 "nvme_admin": false, 00:22:01.235 "nvme_io": false 00:22:01.235 }, 00:22:01.235 "memory_domains": [ 00:22:01.235 { 00:22:01.235 "dma_device_id": "system", 00:22:01.235 "dma_device_type": 1 00:22:01.235 }, 00:22:01.235 { 00:22:01.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.235 "dma_device_type": 2 00:22:01.235 } 00:22:01.235 ], 00:22:01.235 "driver_specific": { 00:22:01.235 "passthru": { 00:22:01.235 "name": "pt4", 00:22:01.235 "base_bdev_name": "malloc4" 00:22:01.235 } 00:22:01.235 } 00:22:01.235 }' 00:22:01.235 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:01.235 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:01.235 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:01.235 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:01.235 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:01.503 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:01.504 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:01.504 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:01.504 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:01.504 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:01.504 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:01.504 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:01.504 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:01.504 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:01.762 [2024-05-14 23:37:24.951707] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.762 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2580db67-7398-4a86-8377-32ba9259d5a7 00:22:01.762 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2580db67-7398-4a86-8377-32ba9259d5a7 ']' 00:22:01.762 23:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:02.020 [2024-05-14 23:37:25.143590] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:02.020 
[2024-05-14 23:37:25.143631] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.020 [2024-05-14 23:37:25.143718] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.020 [2024-05-14 23:37:25.143796] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:02.020 [2024-05-14 23:37:25.143812] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:22:02.020 23:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:02.020 23:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.300 23:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:02.300 23:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:02.300 23:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:02.300 23:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:02.569 23:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:02.569 23:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:02.827 23:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:02.827 23:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:03.086 23:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:03.086 23:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:03.086 23:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:03.086 23:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:03.345 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:03.605 [2024-05-14 23:37:26.827982] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:03.605 request: 00:22:03.605 { 00:22:03.605 "name": "raid_bdev1", 00:22:03.605 "raid_level": "raid1", 00:22:03.605 "base_bdevs": [ 00:22:03.605 "malloc1", 00:22:03.605 "malloc2", 00:22:03.605 "malloc3", 00:22:03.605 "malloc4" 00:22:03.605 ], 00:22:03.605 "superblock": false, 00:22:03.605 "method": "bdev_raid_create", 00:22:03.605 "req_id": 1 00:22:03.605 } 00:22:03.605 Got JSON-RPC error response 00:22:03.605 response: 00:22:03.605 { 00:22:03.605 "code": -17, 00:22:03.605 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:03.605 } 00:22:03.605 [2024-05-14 23:37:26.829797] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:03.605 [2024-05-14 23:37:26.829854] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:03.605 [2024-05-14 23:37:26.829887] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:03.605 [2024-05-14 23:37:26.829941] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:03.605 [2024-05-14 23:37:26.830004] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:03.605 [2024-05-14 23:37:26.830040] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:03.605 [2024-05-14 23:37:26.830096] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:03.605 [2024-05-14 23:37:26.830127] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:03.605 [2024-05-14 23:37:26.830139] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:22:03.605 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:22:03.605 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:03.605 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:03.605 23:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:03.605 23:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.605 
23:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:03.864 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:03.864 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:03.864 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:04.123 [2024-05-14 23:37:27.311966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:04.123 [2024-05-14 23:37:27.312058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.123 [2024-05-14 23:37:27.312107] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f780 00:22:04.123 [2024-05-14 23:37:27.312420] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.123 [2024-05-14 23:37:27.314135] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.123 [2024-05-14 23:37:27.314203] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:04.123 [2024-05-14 23:37:27.314313] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:04.123 [2024-05-14 23:37:27.314382] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:04.123 pt1 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.123 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.382 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:04.382 "name": "raid_bdev1", 00:22:04.382 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:04.382 "strip_size_kb": 0, 00:22:04.382 "state": "configuring", 00:22:04.382 "raid_level": "raid1", 00:22:04.382 "superblock": true, 00:22:04.382 "num_base_bdevs": 4, 00:22:04.382 "num_base_bdevs_discovered": 1, 00:22:04.382 "num_base_bdevs_operational": 4, 00:22:04.382 "base_bdevs_list": [ 00:22:04.382 { 00:22:04.382 "name": "pt1", 00:22:04.382 "uuid": "1f67e184-2aae-5c2a-9c6b-1ee784cbd7ff", 00:22:04.382 "is_configured": true, 
00:22:04.382 "data_offset": 2048, 00:22:04.382 "data_size": 63488 00:22:04.382 }, 00:22:04.382 { 00:22:04.382 "name": null, 00:22:04.382 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:04.382 "is_configured": false, 00:22:04.382 "data_offset": 2048, 00:22:04.382 "data_size": 63488 00:22:04.382 }, 00:22:04.382 { 00:22:04.382 "name": null, 00:22:04.382 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:04.382 "is_configured": false, 00:22:04.382 "data_offset": 2048, 00:22:04.382 "data_size": 63488 00:22:04.382 }, 00:22:04.382 { 00:22:04.382 "name": null, 00:22:04.382 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:04.382 "is_configured": false, 00:22:04.382 "data_offset": 2048, 00:22:04.382 "data_size": 63488 00:22:04.382 } 00:22:04.382 ] 00:22:04.382 }' 00:22:04.382 23:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:04.382 23:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.950 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:22:04.950 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:05.209 [2024-05-14 23:37:28.448248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:05.209 [2024-05-14 23:37:28.448414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.209 [2024-05-14 23:37:28.448498] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031280 00:22:05.209 [2024-05-14 23:37:28.448530] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.209 [2024-05-14 23:37:28.449068] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.209 [2024-05-14 23:37:28.449130] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:05.209 [2024-05-14 23:37:28.449643] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:05.209 [2024-05-14 23:37:28.449687] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:05.209 pt2 00:22:05.209 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:05.468 [2024-05-14 23:37:28.676324] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.468 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.727 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.727 "name": "raid_bdev1", 00:22:05.727 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:05.727 "strip_size_kb": 0, 00:22:05.727 "state": "configuring", 00:22:05.727 "raid_level": "raid1", 00:22:05.727 "superblock": true, 00:22:05.727 "num_base_bdevs": 4, 00:22:05.727 "num_base_bdevs_discovered": 1, 00:22:05.727 "num_base_bdevs_operational": 4, 00:22:05.727 "base_bdevs_list": [ 00:22:05.727 { 00:22:05.727 "name": "pt1", 00:22:05.727 "uuid": "1f67e184-2aae-5c2a-9c6b-1ee784cbd7ff", 00:22:05.727 "is_configured": true, 00:22:05.727 "data_offset": 2048, 00:22:05.727 "data_size": 63488 00:22:05.727 }, 00:22:05.727 { 00:22:05.727 "name": null, 00:22:05.727 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:05.727 "is_configured": false, 00:22:05.727 "data_offset": 2048, 00:22:05.727 "data_size": 63488 00:22:05.727 }, 00:22:05.727 { 00:22:05.727 "name": null, 00:22:05.727 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:05.727 "is_configured": false, 00:22:05.727 "data_offset": 2048, 00:22:05.727 "data_size": 63488 00:22:05.727 }, 00:22:05.727 { 00:22:05.727 "name": null, 00:22:05.727 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:05.727 "is_configured": false, 00:22:05.727 "data_offset": 2048, 00:22:05.727 "data_size": 63488 00:22:05.727 } 00:22:05.727 ] 00:22:05.727 }' 00:22:05.727 23:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.727 23:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.315 23:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:06.315 23:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:06.315 23:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:06.580 [2024-05-14 23:37:29.796387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:06.580 [2024-05-14 23:37:29.796505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.580 [2024-05-14 23:37:29.796558] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032780 00:22:06.580 [2024-05-14 23:37:29.796583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.580 [2024-05-14 23:37:29.797023] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.580 [2024-05-14 23:37:29.797075] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:06.580 [2024-05-14 23:37:29.797389] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:06.580 [2024-05-14 23:37:29.797440] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:06.580 pt2 00:22:06.580 23:37:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:06.580 23:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:06.580 23:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:06.839 [2024-05-14 23:37:30.004409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:06.839 [2024-05-14 23:37:30.004519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.839 [2024-05-14 23:37:30.004580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033c80 00:22:06.839 [2024-05-14 23:37:30.004614] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.839 [2024-05-14 23:37:30.005000] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.839 [2024-05-14 23:37:30.005062] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:06.839 [2024-05-14 23:37:30.005348] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:06.839 [2024-05-14 23:37:30.005409] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:06.839 pt3 00:22:06.839 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:06.839 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:06.839 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:07.098 [2024-05-14 23:37:30.200444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:07.098 [2024-05-14 23:37:30.200548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.098 [2024-05-14 23:37:30.200594] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035180 00:22:07.098 [2024-05-14 23:37:30.200630] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.098 [2024-05-14 23:37:30.201037] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.098 [2024-05-14 23:37:30.201086] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:07.098 [2024-05-14 23:37:30.201498] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:07.098 [2024-05-14 23:37:30.201537] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:07.098 [2024-05-14 23:37:30.201644] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:22:07.098 [2024-05-14 23:37:30.201657] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:07.098 [2024-05-14 23:37:30.201735] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:07.098 [2024-05-14 23:37:30.201965] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:22:07.098 [2024-05-14 23:37:30.201981] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:22:07.099 [2024-05-14 23:37:30.202086] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:07.099 pt4 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.099 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.357 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:07.357 "name": "raid_bdev1", 00:22:07.357 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:07.357 "strip_size_kb": 0, 00:22:07.357 "state": "online", 00:22:07.357 "raid_level": "raid1", 00:22:07.357 "superblock": true, 00:22:07.357 "num_base_bdevs": 4, 00:22:07.357 "num_base_bdevs_discovered": 4, 00:22:07.357 "num_base_bdevs_operational": 4, 00:22:07.357 "base_bdevs_list": [ 00:22:07.357 { 00:22:07.357 "name": "pt1", 00:22:07.357 "uuid": "1f67e184-2aae-5c2a-9c6b-1ee784cbd7ff", 00:22:07.357 "is_configured": true, 00:22:07.357 "data_offset": 2048, 00:22:07.357 "data_size": 63488 00:22:07.357 }, 00:22:07.357 { 00:22:07.357 "name": "pt2", 00:22:07.357 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:07.357 "is_configured": true, 00:22:07.357 "data_offset": 2048, 00:22:07.357 "data_size": 63488 00:22:07.357 }, 00:22:07.357 { 00:22:07.357 "name": "pt3", 00:22:07.357 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:07.357 "is_configured": true, 00:22:07.357 "data_offset": 2048, 00:22:07.357 "data_size": 63488 00:22:07.357 }, 00:22:07.357 { 00:22:07.357 "name": "pt4", 00:22:07.357 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:07.357 "is_configured": true, 00:22:07.357 "data_offset": 2048, 00:22:07.357 "data_size": 63488 00:22:07.357 } 00:22:07.357 ] 00:22:07.357 }' 00:22:07.357 23:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:07.357 23:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.924 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:07.924 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:22:07.924 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local 
raid_bdev_info 00:22:07.924 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:07.924 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:07.924 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:22:07.924 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:07.924 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:08.182 [2024-05-14 23:37:31.300776] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:08.182 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:08.182 "name": "raid_bdev1", 00:22:08.182 "aliases": [ 00:22:08.182 "2580db67-7398-4a86-8377-32ba9259d5a7" 00:22:08.182 ], 00:22:08.182 "product_name": "Raid Volume", 00:22:08.182 "block_size": 512, 00:22:08.182 "num_blocks": 63488, 00:22:08.182 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:08.182 "assigned_rate_limits": { 00:22:08.182 "rw_ios_per_sec": 0, 00:22:08.182 "rw_mbytes_per_sec": 0, 00:22:08.182 "r_mbytes_per_sec": 0, 00:22:08.182 "w_mbytes_per_sec": 0 00:22:08.182 }, 00:22:08.182 "claimed": false, 00:22:08.182 "zoned": false, 00:22:08.182 "supported_io_types": { 00:22:08.182 "read": true, 00:22:08.183 "write": true, 00:22:08.183 "unmap": false, 00:22:08.183 "write_zeroes": true, 00:22:08.183 "flush": false, 00:22:08.183 "reset": true, 00:22:08.183 "compare": false, 00:22:08.183 "compare_and_write": false, 00:22:08.183 "abort": false, 00:22:08.183 "nvme_admin": false, 00:22:08.183 "nvme_io": false 00:22:08.183 }, 00:22:08.183 "memory_domains": [ 00:22:08.183 { 00:22:08.183 "dma_device_id": "system", 00:22:08.183 "dma_device_type": 1 00:22:08.183 }, 00:22:08.183 { 00:22:08.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.183 "dma_device_type": 2 00:22:08.183 }, 00:22:08.183 { 00:22:08.183 "dma_device_id": "system", 00:22:08.183 "dma_device_type": 1 00:22:08.183 }, 00:22:08.183 { 00:22:08.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.183 "dma_device_type": 2 00:22:08.183 }, 00:22:08.183 { 00:22:08.183 "dma_device_id": "system", 00:22:08.183 "dma_device_type": 1 00:22:08.183 }, 00:22:08.183 { 00:22:08.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.183 "dma_device_type": 2 00:22:08.183 }, 00:22:08.183 { 00:22:08.183 "dma_device_id": "system", 00:22:08.183 "dma_device_type": 1 00:22:08.183 }, 00:22:08.183 { 00:22:08.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.183 "dma_device_type": 2 00:22:08.183 } 00:22:08.183 ], 00:22:08.183 "driver_specific": { 00:22:08.183 "raid": { 00:22:08.183 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:08.183 "strip_size_kb": 0, 00:22:08.183 "state": "online", 00:22:08.183 "raid_level": "raid1", 00:22:08.183 "superblock": true, 00:22:08.183 "num_base_bdevs": 4, 00:22:08.183 "num_base_bdevs_discovered": 4, 00:22:08.183 "num_base_bdevs_operational": 4, 00:22:08.183 "base_bdevs_list": [ 00:22:08.183 { 00:22:08.183 "name": "pt1", 00:22:08.183 "uuid": "1f67e184-2aae-5c2a-9c6b-1ee784cbd7ff", 00:22:08.183 "is_configured": true, 00:22:08.183 "data_offset": 2048, 00:22:08.183 "data_size": 63488 00:22:08.183 }, 00:22:08.183 { 00:22:08.183 "name": "pt2", 00:22:08.183 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:08.183 "is_configured": true, 00:22:08.183 "data_offset": 2048, 00:22:08.183 "data_size": 63488 
00:22:08.183 }, 00:22:08.183 { 00:22:08.183 "name": "pt3", 00:22:08.183 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:08.183 "is_configured": true, 00:22:08.183 "data_offset": 2048, 00:22:08.183 "data_size": 63488 00:22:08.183 }, 00:22:08.183 { 00:22:08.183 "name": "pt4", 00:22:08.183 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:08.183 "is_configured": true, 00:22:08.183 "data_offset": 2048, 00:22:08.183 "data_size": 63488 00:22:08.183 } 00:22:08.183 ] 00:22:08.183 } 00:22:08.183 } 00:22:08.183 }' 00:22:08.183 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:08.183 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:22:08.183 pt2 00:22:08.183 pt3 00:22:08.183 pt4' 00:22:08.183 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:08.183 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:08.183 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:08.441 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:08.441 "name": "pt1", 00:22:08.441 "aliases": [ 00:22:08.441 "1f67e184-2aae-5c2a-9c6b-1ee784cbd7ff" 00:22:08.441 ], 00:22:08.441 "product_name": "passthru", 00:22:08.441 "block_size": 512, 00:22:08.441 "num_blocks": 65536, 00:22:08.441 "uuid": "1f67e184-2aae-5c2a-9c6b-1ee784cbd7ff", 00:22:08.441 "assigned_rate_limits": { 00:22:08.441 "rw_ios_per_sec": 0, 00:22:08.441 "rw_mbytes_per_sec": 0, 00:22:08.441 "r_mbytes_per_sec": 0, 00:22:08.441 "w_mbytes_per_sec": 0 00:22:08.441 }, 00:22:08.441 "claimed": true, 00:22:08.441 "claim_type": "exclusive_write", 00:22:08.441 "zoned": false, 00:22:08.441 "supported_io_types": { 00:22:08.441 "read": true, 00:22:08.441 "write": true, 00:22:08.441 "unmap": true, 00:22:08.441 "write_zeroes": true, 00:22:08.441 "flush": true, 00:22:08.441 "reset": true, 00:22:08.441 "compare": false, 00:22:08.441 "compare_and_write": false, 00:22:08.441 "abort": true, 00:22:08.441 "nvme_admin": false, 00:22:08.441 "nvme_io": false 00:22:08.441 }, 00:22:08.441 "memory_domains": [ 00:22:08.441 { 00:22:08.441 "dma_device_id": "system", 00:22:08.441 "dma_device_type": 1 00:22:08.441 }, 00:22:08.441 { 00:22:08.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.441 "dma_device_type": 2 00:22:08.441 } 00:22:08.441 ], 00:22:08.441 "driver_specific": { 00:22:08.441 "passthru": { 00:22:08.441 "name": "pt1", 00:22:08.441 "base_bdev_name": "malloc1" 00:22:08.441 } 00:22:08.441 } 00:22:08.441 }' 00:22:08.441 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:08.441 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:08.699 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:08.699 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:08.699 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:08.699 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:08.699 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:08.699 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 
00:22:08.699 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:08.699 23:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:08.957 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:08.957 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:08.957 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:08.957 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:08.957 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:09.215 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:09.215 "name": "pt2", 00:22:09.215 "aliases": [ 00:22:09.215 "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935" 00:22:09.215 ], 00:22:09.215 "product_name": "passthru", 00:22:09.215 "block_size": 512, 00:22:09.215 "num_blocks": 65536, 00:22:09.215 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:09.215 "assigned_rate_limits": { 00:22:09.215 "rw_ios_per_sec": 0, 00:22:09.215 "rw_mbytes_per_sec": 0, 00:22:09.215 "r_mbytes_per_sec": 0, 00:22:09.215 "w_mbytes_per_sec": 0 00:22:09.215 }, 00:22:09.215 "claimed": true, 00:22:09.215 "claim_type": "exclusive_write", 00:22:09.215 "zoned": false, 00:22:09.215 "supported_io_types": { 00:22:09.215 "read": true, 00:22:09.215 "write": true, 00:22:09.215 "unmap": true, 00:22:09.215 "write_zeroes": true, 00:22:09.215 "flush": true, 00:22:09.215 "reset": true, 00:22:09.215 "compare": false, 00:22:09.215 "compare_and_write": false, 00:22:09.215 "abort": true, 00:22:09.215 "nvme_admin": false, 00:22:09.215 "nvme_io": false 00:22:09.215 }, 00:22:09.215 "memory_domains": [ 00:22:09.215 { 00:22:09.215 "dma_device_id": "system", 00:22:09.215 "dma_device_type": 1 00:22:09.215 }, 00:22:09.215 { 00:22:09.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.215 "dma_device_type": 2 00:22:09.215 } 00:22:09.215 ], 00:22:09.215 "driver_specific": { 00:22:09.215 "passthru": { 00:22:09.215 "name": "pt2", 00:22:09.215 "base_bdev_name": "malloc2" 00:22:09.215 } 00:22:09.215 } 00:22:09.215 }' 00:22:09.215 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:09.215 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:09.215 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:09.215 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:09.215 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:09.473 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:09.473 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:09.473 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:09.473 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:09.473 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:09.473 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:09.473 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:09.473 23:37:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:09.473 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:09.473 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:09.732 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:09.732 "name": "pt3", 00:22:09.732 "aliases": [ 00:22:09.732 "c627baf6-7e5c-5df1-a2d3-feff114c0198" 00:22:09.732 ], 00:22:09.732 "product_name": "passthru", 00:22:09.732 "block_size": 512, 00:22:09.732 "num_blocks": 65536, 00:22:09.732 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:09.732 "assigned_rate_limits": { 00:22:09.732 "rw_ios_per_sec": 0, 00:22:09.732 "rw_mbytes_per_sec": 0, 00:22:09.732 "r_mbytes_per_sec": 0, 00:22:09.732 "w_mbytes_per_sec": 0 00:22:09.732 }, 00:22:09.732 "claimed": true, 00:22:09.733 "claim_type": "exclusive_write", 00:22:09.733 "zoned": false, 00:22:09.733 "supported_io_types": { 00:22:09.733 "read": true, 00:22:09.733 "write": true, 00:22:09.733 "unmap": true, 00:22:09.733 "write_zeroes": true, 00:22:09.733 "flush": true, 00:22:09.733 "reset": true, 00:22:09.733 "compare": false, 00:22:09.733 "compare_and_write": false, 00:22:09.733 "abort": true, 00:22:09.733 "nvme_admin": false, 00:22:09.733 "nvme_io": false 00:22:09.733 }, 00:22:09.733 "memory_domains": [ 00:22:09.733 { 00:22:09.733 "dma_device_id": "system", 00:22:09.733 "dma_device_type": 1 00:22:09.733 }, 00:22:09.733 { 00:22:09.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.733 "dma_device_type": 2 00:22:09.733 } 00:22:09.733 ], 00:22:09.733 "driver_specific": { 00:22:09.733 "passthru": { 00:22:09.733 "name": "pt3", 00:22:09.733 "base_bdev_name": "malloc3" 00:22:09.733 } 00:22:09.733 } 00:22:09.733 }' 00:22:09.733 23:37:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:09.733 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:09.991 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:09.991 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:09.991 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:09.991 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:09.991 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:09.991 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:10.250 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.250 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:10.250 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:10.250 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:10.250 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:10.250 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:10.250 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:10.507 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:10.507 "name": "pt4", 
00:22:10.507 "aliases": [ 00:22:10.507 "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a" 00:22:10.507 ], 00:22:10.507 "product_name": "passthru", 00:22:10.508 "block_size": 512, 00:22:10.508 "num_blocks": 65536, 00:22:10.508 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:10.508 "assigned_rate_limits": { 00:22:10.508 "rw_ios_per_sec": 0, 00:22:10.508 "rw_mbytes_per_sec": 0, 00:22:10.508 "r_mbytes_per_sec": 0, 00:22:10.508 "w_mbytes_per_sec": 0 00:22:10.508 }, 00:22:10.508 "claimed": true, 00:22:10.508 "claim_type": "exclusive_write", 00:22:10.508 "zoned": false, 00:22:10.508 "supported_io_types": { 00:22:10.508 "read": true, 00:22:10.508 "write": true, 00:22:10.508 "unmap": true, 00:22:10.508 "write_zeroes": true, 00:22:10.508 "flush": true, 00:22:10.508 "reset": true, 00:22:10.508 "compare": false, 00:22:10.508 "compare_and_write": false, 00:22:10.508 "abort": true, 00:22:10.508 "nvme_admin": false, 00:22:10.508 "nvme_io": false 00:22:10.508 }, 00:22:10.508 "memory_domains": [ 00:22:10.508 { 00:22:10.508 "dma_device_id": "system", 00:22:10.508 "dma_device_type": 1 00:22:10.508 }, 00:22:10.508 { 00:22:10.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.508 "dma_device_type": 2 00:22:10.508 } 00:22:10.508 ], 00:22:10.508 "driver_specific": { 00:22:10.508 "passthru": { 00:22:10.508 "name": "pt4", 00:22:10.508 "base_bdev_name": "malloc4" 00:22:10.508 } 00:22:10.508 } 00:22:10.508 }' 00:22:10.508 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:10.508 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:10.508 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:10.508 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:10.508 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:10.767 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:10.767 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:10.767 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:10.767 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.767 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:10.767 23:37:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:10.767 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:10.767 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:10.767 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:11.026 [2024-05-14 23:37:34.277252] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:11.026 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2580db67-7398-4a86-8377-32ba9259d5a7 '!=' 2580db67-7398-4a86-8377-32ba9259d5a7 ']' 00:22:11.026 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:11.026 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:22:11.026 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:22:11.026 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:11.285 [2024-05-14 23:37:34.533089] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.285 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.544 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:11.544 "name": "raid_bdev1", 00:22:11.544 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:11.544 "strip_size_kb": 0, 00:22:11.544 "state": "online", 00:22:11.544 "raid_level": "raid1", 00:22:11.544 "superblock": true, 00:22:11.544 "num_base_bdevs": 4, 00:22:11.544 "num_base_bdevs_discovered": 3, 00:22:11.544 "num_base_bdevs_operational": 3, 00:22:11.544 "base_bdevs_list": [ 00:22:11.544 { 00:22:11.544 "name": null, 00:22:11.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.544 "is_configured": false, 00:22:11.544 "data_offset": 2048, 00:22:11.544 "data_size": 63488 00:22:11.544 }, 00:22:11.544 { 00:22:11.544 "name": "pt2", 00:22:11.544 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:11.544 "is_configured": true, 00:22:11.544 "data_offset": 2048, 00:22:11.544 "data_size": 63488 00:22:11.544 }, 00:22:11.544 { 00:22:11.544 "name": "pt3", 00:22:11.544 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:11.544 "is_configured": true, 00:22:11.544 "data_offset": 2048, 00:22:11.544 "data_size": 63488 00:22:11.544 }, 00:22:11.544 { 00:22:11.544 "name": "pt4", 00:22:11.544 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:11.544 "is_configured": true, 00:22:11.544 "data_offset": 2048, 00:22:11.544 "data_size": 63488 00:22:11.544 } 00:22:11.544 ] 00:22:11.544 }' 00:22:11.544 23:37:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:11.544 23:37:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.480 23:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:12.480 [2024-05-14 23:37:35.757282] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:12.480 [2024-05-14 23:37:35.757336] 
bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:12.480 [2024-05-14 23:37:35.757405] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.480 [2024-05-14 23:37:35.757458] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:12.480 [2024-05-14 23:37:35.757479] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:22:12.738 23:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.738 23:37:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:12.997 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:12.997 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:12.997 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:12.997 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:12.997 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:12.997 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:12.997 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:12.997 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:13.255 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:13.255 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:13.255 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:13.513 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:13.513 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:13.513 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:13.513 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:13.513 23:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:13.772 [2024-05-14 23:37:36.989524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:13.772 [2024-05-14 23:37:36.989647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.772 [2024-05-14 23:37:36.989690] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000036680 00:22:13.772 [2024-05-14 23:37:36.989720] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.772 [2024-05-14 23:37:36.991752] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.772 [2024-05-14 23:37:36.991814] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:13.772 [2024-05-14 23:37:36.991927] 
bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:13.772 [2024-05-14 23:37:36.992008] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:13.772 pt2 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.772 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.031 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.031 "name": "raid_bdev1", 00:22:14.031 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:14.031 "strip_size_kb": 0, 00:22:14.031 "state": "configuring", 00:22:14.031 "raid_level": "raid1", 00:22:14.031 "superblock": true, 00:22:14.031 "num_base_bdevs": 4, 00:22:14.031 "num_base_bdevs_discovered": 1, 00:22:14.031 "num_base_bdevs_operational": 3, 00:22:14.031 "base_bdevs_list": [ 00:22:14.031 { 00:22:14.031 "name": null, 00:22:14.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.031 "is_configured": false, 00:22:14.031 "data_offset": 2048, 00:22:14.031 "data_size": 63488 00:22:14.031 }, 00:22:14.031 { 00:22:14.031 "name": "pt2", 00:22:14.031 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:14.031 "is_configured": true, 00:22:14.031 "data_offset": 2048, 00:22:14.031 "data_size": 63488 00:22:14.031 }, 00:22:14.031 { 00:22:14.031 "name": null, 00:22:14.031 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:14.031 "is_configured": false, 00:22:14.031 "data_offset": 2048, 00:22:14.031 "data_size": 63488 00:22:14.031 }, 00:22:14.031 { 00:22:14.031 "name": null, 00:22:14.031 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:14.031 "is_configured": false, 00:22:14.031 "data_offset": 2048, 00:22:14.031 "data_size": 63488 00:22:14.031 } 00:22:14.031 ] 00:22:14.031 }' 00:22:14.031 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.031 23:37:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.965 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:14.965 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:14.965 23:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:14.965 [2024-05-14 23:37:38.109761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:14.965 [2024-05-14 23:37:38.109874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.965 [2024-05-14 23:37:38.109927] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000037e80 00:22:14.965 [2024-05-14 23:37:38.109958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.965 [2024-05-14 23:37:38.110526] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.965 [2024-05-14 23:37:38.110577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:14.965 [2024-05-14 23:37:38.110684] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:14.965 [2024-05-14 23:37:38.110712] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:14.965 pt3 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.965 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.225 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:15.225 "name": "raid_bdev1", 00:22:15.225 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:15.225 "strip_size_kb": 0, 00:22:15.225 "state": "configuring", 00:22:15.225 "raid_level": "raid1", 00:22:15.225 "superblock": true, 00:22:15.225 "num_base_bdevs": 4, 00:22:15.225 "num_base_bdevs_discovered": 2, 00:22:15.225 "num_base_bdevs_operational": 3, 00:22:15.225 "base_bdevs_list": [ 00:22:15.225 { 00:22:15.225 "name": null, 00:22:15.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.225 "is_configured": false, 00:22:15.225 "data_offset": 2048, 00:22:15.225 "data_size": 63488 00:22:15.225 }, 00:22:15.225 { 00:22:15.225 "name": "pt2", 00:22:15.225 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:15.225 "is_configured": true, 00:22:15.225 "data_offset": 2048, 00:22:15.225 "data_size": 63488 00:22:15.225 }, 00:22:15.225 { 00:22:15.225 "name": "pt3", 00:22:15.225 
"uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:15.225 "is_configured": true, 00:22:15.225 "data_offset": 2048, 00:22:15.225 "data_size": 63488 00:22:15.225 }, 00:22:15.225 { 00:22:15.225 "name": null, 00:22:15.225 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:15.225 "is_configured": false, 00:22:15.225 "data_offset": 2048, 00:22:15.225 "data_size": 63488 00:22:15.225 } 00:22:15.225 ] 00:22:15.225 }' 00:22:15.225 23:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:15.225 23:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.792 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:15.792 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:15.793 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:22:15.793 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:16.051 [2024-05-14 23:37:39.281984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:16.051 [2024-05-14 23:37:39.282122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.051 [2024-05-14 23:37:39.282381] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000039380 00:22:16.051 [2024-05-14 23:37:39.282421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.051 [2024-05-14 23:37:39.282837] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.051 [2024-05-14 23:37:39.282873] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:16.051 [2024-05-14 23:37:39.282965] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:16.051 [2024-05-14 23:37:39.282993] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:16.052 [2024-05-14 23:37:39.283096] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:22:16.052 [2024-05-14 23:37:39.283110] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:16.052 [2024-05-14 23:37:39.283205] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:16.052 [2024-05-14 23:37:39.283424] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:22:16.052 [2024-05-14 23:37:39.283439] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:22:16.052 [2024-05-14 23:37:39.283537] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.052 pt4 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.052 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.310 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:16.310 "name": "raid_bdev1", 00:22:16.310 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:16.310 "strip_size_kb": 0, 00:22:16.310 "state": "online", 00:22:16.310 "raid_level": "raid1", 00:22:16.310 "superblock": true, 00:22:16.310 "num_base_bdevs": 4, 00:22:16.310 "num_base_bdevs_discovered": 3, 00:22:16.310 "num_base_bdevs_operational": 3, 00:22:16.310 "base_bdevs_list": [ 00:22:16.310 { 00:22:16.310 "name": null, 00:22:16.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.310 "is_configured": false, 00:22:16.310 "data_offset": 2048, 00:22:16.311 "data_size": 63488 00:22:16.311 }, 00:22:16.311 { 00:22:16.311 "name": "pt2", 00:22:16.311 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:16.311 "is_configured": true, 00:22:16.311 "data_offset": 2048, 00:22:16.311 "data_size": 63488 00:22:16.311 }, 00:22:16.311 { 00:22:16.311 "name": "pt3", 00:22:16.311 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:16.311 "is_configured": true, 00:22:16.311 "data_offset": 2048, 00:22:16.311 "data_size": 63488 00:22:16.311 }, 00:22:16.311 { 00:22:16.311 "name": "pt4", 00:22:16.311 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:16.311 "is_configured": true, 00:22:16.311 "data_offset": 2048, 00:22:16.311 "data_size": 63488 00:22:16.311 } 00:22:16.311 ] 00:22:16.311 }' 00:22:16.311 23:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:16.311 23:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.275 23:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # '[' 4 -gt 2 ']' 00:22:17.275 23:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:17.275 [2024-05-14 23:37:40.530265] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.275 [2024-05-14 23:37:40.530320] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:17.275 [2024-05-14 23:37:40.530408] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.275 [2024-05-14 23:37:40.530478] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.275 [2024-05-14 23:37:40.530494] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:22:17.275 23:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.275 23:37:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@528 -- # jq -r '.[]' 00:22:17.534 23:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # raid_bdev= 00:22:17.534 23:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@529 -- # '[' -n '' ']' 00:22:17.534 23:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:17.793 [2024-05-14 23:37:41.042364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:17.793 [2024-05-14 23:37:41.042485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.793 [2024-05-14 23:37:41.042544] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003a880 00:22:17.793 [2024-05-14 23:37:41.042578] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.793 [2024-05-14 23:37:41.045006] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.793 [2024-05-14 23:37:41.045094] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:17.793 [2024-05-14 23:37:41.045249] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:17.794 [2024-05-14 23:37:41.045345] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:17.794 pt1 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.794 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.372 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.372 "name": "raid_bdev1", 00:22:18.372 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:18.372 "strip_size_kb": 0, 00:22:18.372 "state": "configuring", 00:22:18.372 "raid_level": "raid1", 00:22:18.372 "superblock": true, 00:22:18.372 "num_base_bdevs": 4, 00:22:18.372 "num_base_bdevs_discovered": 1, 00:22:18.372 "num_base_bdevs_operational": 4, 00:22:18.372 "base_bdevs_list": [ 00:22:18.372 { 00:22:18.372 "name": "pt1", 00:22:18.372 "uuid": "1f67e184-2aae-5c2a-9c6b-1ee784cbd7ff", 00:22:18.372 "is_configured": true, 00:22:18.372 "data_offset": 2048, 00:22:18.372 
"data_size": 63488 00:22:18.372 }, 00:22:18.372 { 00:22:18.372 "name": null, 00:22:18.372 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:18.372 "is_configured": false, 00:22:18.372 "data_offset": 2048, 00:22:18.372 "data_size": 63488 00:22:18.372 }, 00:22:18.372 { 00:22:18.372 "name": null, 00:22:18.372 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:18.372 "is_configured": false, 00:22:18.372 "data_offset": 2048, 00:22:18.372 "data_size": 63488 00:22:18.372 }, 00:22:18.372 { 00:22:18.372 "name": null, 00:22:18.372 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:18.372 "is_configured": false, 00:22:18.372 "data_offset": 2048, 00:22:18.372 "data_size": 63488 00:22:18.372 } 00:22:18.372 ] 00:22:18.372 }' 00:22:18.372 23:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.372 23:37:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.938 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i = 1 )) 00:22:18.938 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:22:18.938 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:19.197 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:22:19.197 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:22:19.197 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:19.456 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:22:19.456 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:22:19.456 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i++ )) 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # (( i < num_base_bdevs )) 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # i=3 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:19.714 [2024-05-14 23:37:42.974609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:19.714 [2024-05-14 23:37:42.974727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.714 [2024-05-14 23:37:42.974798] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003c080 00:22:19.714 [2024-05-14 23:37:42.974832] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.714 [2024-05-14 23:37:42.975283] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.714 [2024-05-14 23:37:42.975711] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:19.714 [2024-05-14 23:37:42.975828] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:19.714 [2024-05-14 23:37:42.975846] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:19.714 [2024-05-14 23:37:42.975855] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:19.714 [2024-05-14 23:37:42.975876] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state configuring 00:22:19.714 [2024-05-14 23:37:42.975951] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:19.714 pt4 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@551 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.714 23:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.973 23:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:19.973 "name": "raid_bdev1", 00:22:19.973 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:19.973 "strip_size_kb": 0, 00:22:19.973 "state": "configuring", 00:22:19.973 "raid_level": "raid1", 00:22:19.973 "superblock": true, 00:22:19.973 "num_base_bdevs": 4, 00:22:19.973 "num_base_bdevs_discovered": 1, 00:22:19.973 "num_base_bdevs_operational": 3, 00:22:19.973 "base_bdevs_list": [ 00:22:19.973 { 00:22:19.973 "name": null, 00:22:19.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.973 "is_configured": false, 00:22:19.973 "data_offset": 2048, 00:22:19.973 "data_size": 63488 00:22:19.973 }, 00:22:19.973 { 00:22:19.973 "name": null, 00:22:19.973 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:19.973 "is_configured": false, 00:22:19.973 "data_offset": 2048, 00:22:19.973 "data_size": 63488 00:22:19.973 }, 00:22:19.973 { 00:22:19.973 "name": null, 00:22:19.973 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:19.973 "is_configured": false, 00:22:19.973 "data_offset": 2048, 00:22:19.973 "data_size": 63488 00:22:19.973 }, 00:22:19.973 { 00:22:19.973 "name": "pt4", 00:22:19.973 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:19.973 "is_configured": true, 00:22:19.973 "data_offset": 2048, 00:22:19.973 "data_size": 63488 00:22:19.973 } 00:22:19.973 ] 00:22:19.973 }' 00:22:19.973 23:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:19.973 23:37:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.910 23:37:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i = 1 )) 00:22:20.910 23:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:22:20.910 23:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:20.910 [2024-05-14 23:37:44.098768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:20.910 [2024-05-14 23:37:44.098892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.910 [2024-05-14 23:37:44.098943] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003d580 00:22:20.910 [2024-05-14 23:37:44.098978] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.910 [2024-05-14 23:37:44.099620] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.910 [2024-05-14 23:37:44.099678] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:20.910 [2024-05-14 23:37:44.099767] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:20.910 [2024-05-14 23:37:44.099793] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:20.910 pt2 00:22:20.910 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i++ )) 00:22:20.910 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:22:20.910 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:21.182 [2024-05-14 23:37:44.302842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:21.182 [2024-05-14 23:37:44.302945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.182 [2024-05-14 23:37:44.302990] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003ea80 00:22:21.182 [2024-05-14 23:37:44.303030] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.182 [2024-05-14 23:37:44.303703] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.182 [2024-05-14 23:37:44.303761] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:21.182 [2024-05-14 23:37:44.303856] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:21.182 [2024-05-14 23:37:44.303893] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:21.182 [2024-05-14 23:37:44.303986] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012300 00:22:21.182 [2024-05-14 23:37:44.303999] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:21.182 [2024-05-14 23:37:44.304077] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:22:21.182 [2024-05-14 23:37:44.304311] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012300 00:22:21.182 [2024-05-14 23:37:44.304331] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012300 00:22:21.182 [2024-05-14 23:37:44.304431] bdev_raid.c: 
315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.182 pt3 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i++ )) 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # (( i < num_base_bdevs - 1 )) 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@559 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.182 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.440 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:21.440 "name": "raid_bdev1", 00:22:21.440 "uuid": "2580db67-7398-4a86-8377-32ba9259d5a7", 00:22:21.440 "strip_size_kb": 0, 00:22:21.440 "state": "online", 00:22:21.440 "raid_level": "raid1", 00:22:21.440 "superblock": true, 00:22:21.440 "num_base_bdevs": 4, 00:22:21.440 "num_base_bdevs_discovered": 3, 00:22:21.440 "num_base_bdevs_operational": 3, 00:22:21.440 "base_bdevs_list": [ 00:22:21.440 { 00:22:21.440 "name": null, 00:22:21.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.440 "is_configured": false, 00:22:21.440 "data_offset": 2048, 00:22:21.440 "data_size": 63488 00:22:21.440 }, 00:22:21.440 { 00:22:21.440 "name": "pt2", 00:22:21.440 "uuid": "d6a9e2d7-8e3d-5f5b-b6dc-e310e6459935", 00:22:21.440 "is_configured": true, 00:22:21.440 "data_offset": 2048, 00:22:21.440 "data_size": 63488 00:22:21.440 }, 00:22:21.440 { 00:22:21.440 "name": "pt3", 00:22:21.440 "uuid": "c627baf6-7e5c-5df1-a2d3-feff114c0198", 00:22:21.440 "is_configured": true, 00:22:21.441 "data_offset": 2048, 00:22:21.441 "data_size": 63488 00:22:21.441 }, 00:22:21.441 { 00:22:21.441 "name": "pt4", 00:22:21.441 "uuid": "ee46ea3e-b66b-5826-8d9d-9831c27c8e2a", 00:22:21.441 "is_configured": true, 00:22:21.441 "data_offset": 2048, 00:22:21.441 "data_size": 63488 00:22:21.441 } 00:22:21.441 ] 00:22:21.441 }' 00:22:21.441 23:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:21.441 23:37:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # 
jq -r '.[] | .uuid' 00:22:22.375 [2024-05-14 23:37:45.511195] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # '[' 2580db67-7398-4a86-8377-32ba9259d5a7 '!=' 2580db67-7398-4a86-8377-32ba9259d5a7 ']' 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@568 -- # killprocess 71999 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 71999 ']' 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 71999 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71999 00:22:22.375 killing process with pid 71999 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71999' 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 71999 00:22:22.375 23:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 71999 00:22:22.375 [2024-05-14 23:37:45.552616] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:22.375 [2024-05-14 23:37:45.552727] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.375 [2024-05-14 23:37:45.552811] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.375 [2024-05-14 23:37:45.552830] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012300 name raid_bdev1, state offline 00:22:22.631 [2024-05-14 23:37:45.911842] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:24.006 ************************************ 00:22:24.006 END TEST raid_superblock_test 00:22:24.006 ************************************ 00:22:24.006 23:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # return 0 00:22:24.006 00:22:24.006 real 0m29.506s 00:22:24.006 user 0m55.415s 00:22:24.006 sys 0m2.967s 00:22:24.006 23:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:24.006 23:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.006 23:37:47 bdev_raid -- bdev/bdev_raid.sh@821 -- # '[' '' = true ']' 00:22:24.006 23:37:47 bdev_raid -- bdev/bdev_raid.sh@830 -- # '[' n == y ']' 00:22:24.006 23:37:47 bdev_raid -- bdev/bdev_raid.sh@842 -- # base_blocklen=4096 00:22:24.006 23:37:47 bdev_raid -- bdev/bdev_raid.sh@844 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:22:24.006 23:37:47 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:24.006 23:37:47 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:24.006 23:37:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:24.006 ************************************ 00:22:24.006 START TEST raid_state_function_test_sb_4k 00:22:24.006 ************************************ 00:22:24.006 23:37:47 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:22:24.006 Process raid pid: 72916 00:22:24.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
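For reference, the RPC sequence that the 4 KiB-block state-function test exercises below can be replayed by hand against the bdev_svc app listening on /var/tmp/spdk-raid.sock. This is a minimal sketch assembled from the commands visible in this trace (not the test script itself); note that in the test the raid bdev is created before its base bdevs exist, so it sits in the "configuring" state until both appear and only then goes "online":
  # create two malloc base bdevs: 32 MiB each with a 4096-byte block size (8192 blocks apiece)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2
  # assemble them into a raid1 bdev with an on-disk superblock (-s)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # inspect the raid state and the discovered/operational base bdev counts
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all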
00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # raid_pid=72916 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 72916' 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@247 -- # waitforlisten 72916 /var/tmp/spdk-raid.sock 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 72916 ']' 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:24.006 23:37:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.264 [2024-05-14 23:37:47.403309] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:22:24.264 [2024-05-14 23:37:47.403519] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.521 [2024-05-14 23:37:47.567238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.778 [2024-05-14 23:37:47.822930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.778 [2024-05-14 23:37:48.028140] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:25.036 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:25.036 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:22:25.036 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:25.292 [2024-05-14 23:37:48.463283] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:25.292 [2024-05-14 23:37:48.463375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:25.293 [2024-05-14 23:37:48.463411] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:25.293 [2024-05-14 23:37:48.463434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.293 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.551 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:25.551 "name": "Existed_Raid", 00:22:25.551 "uuid": "38a7acfd-6973-41dd-b416-ed44b3310641", 00:22:25.551 "strip_size_kb": 0, 00:22:25.551 "state": "configuring", 00:22:25.551 "raid_level": "raid1", 00:22:25.551 "superblock": true, 00:22:25.551 "num_base_bdevs": 2, 00:22:25.551 "num_base_bdevs_discovered": 0, 00:22:25.551 "num_base_bdevs_operational": 2, 00:22:25.551 "base_bdevs_list": [ 00:22:25.551 { 00:22:25.551 "name": "BaseBdev1", 00:22:25.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.551 "is_configured": false, 00:22:25.551 "data_offset": 0, 00:22:25.551 "data_size": 0 00:22:25.551 }, 00:22:25.551 { 00:22:25.551 "name": "BaseBdev2", 00:22:25.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.551 "is_configured": false, 00:22:25.551 "data_offset": 0, 00:22:25.551 "data_size": 0 00:22:25.551 } 00:22:25.551 ] 00:22:25.551 }' 00:22:25.551 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:25.551 23:37:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.119 23:37:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:26.378 [2024-05-14 23:37:49.483258] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:26.378 [2024-05-14 23:37:49.483320] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:22:26.378 23:37:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:26.636 [2024-05-14 23:37:49.723295] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:26.636 [2024-05-14 23:37:49.723396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:26.636 [2024-05-14 23:37:49.723411] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:26.636 [2024-05-14 23:37:49.723437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:26.636 23:37:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:22:26.892 [2024-05-14 23:37:49.958909] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:26.892 BaseBdev1 00:22:26.892 23:37:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:22:26.892 23:37:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:26.892 23:37:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:26.892 23:37:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:22:26.892 23:37:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:26.892 23:37:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:26.892 23:37:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:26.892 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:27.150 [ 00:22:27.150 { 00:22:27.150 "name": "BaseBdev1", 00:22:27.150 "aliases": [ 00:22:27.150 "cd94a40f-624c-40d0-b5d1-bf28acc5fadf" 00:22:27.150 ], 00:22:27.150 "product_name": "Malloc disk", 00:22:27.150 "block_size": 4096, 00:22:27.150 "num_blocks": 8192, 00:22:27.150 "uuid": "cd94a40f-624c-40d0-b5d1-bf28acc5fadf", 00:22:27.150 "assigned_rate_limits": { 00:22:27.150 "rw_ios_per_sec": 0, 00:22:27.150 "rw_mbytes_per_sec": 0, 00:22:27.150 "r_mbytes_per_sec": 0, 00:22:27.150 "w_mbytes_per_sec": 0 00:22:27.150 }, 00:22:27.150 "claimed": true, 00:22:27.150 "claim_type": "exclusive_write", 00:22:27.150 "zoned": false, 00:22:27.150 "supported_io_types": { 00:22:27.150 "read": true, 00:22:27.150 "write": true, 00:22:27.150 "unmap": true, 00:22:27.150 "write_zeroes": true, 00:22:27.150 "flush": true, 00:22:27.150 "reset": true, 00:22:27.150 "compare": false, 00:22:27.150 "compare_and_write": false, 00:22:27.150 "abort": true, 00:22:27.150 "nvme_admin": false, 00:22:27.150 "nvme_io": false 00:22:27.150 }, 00:22:27.150 "memory_domains": [ 00:22:27.150 { 00:22:27.150 "dma_device_id": "system", 00:22:27.150 "dma_device_type": 1 00:22:27.150 }, 00:22:27.150 { 00:22:27.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.150 "dma_device_type": 2 00:22:27.150 } 00:22:27.150 ], 00:22:27.150 "driver_specific": {} 00:22:27.150 } 00:22:27.150 ] 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:27.150 
23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.150 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.408 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:27.408 "name": "Existed_Raid", 00:22:27.408 "uuid": "f2c0dd24-7d4f-45c3-bfd2-60befe4e8ee5", 00:22:27.408 "strip_size_kb": 0, 00:22:27.408 "state": "configuring", 00:22:27.408 "raid_level": "raid1", 00:22:27.408 "superblock": true, 00:22:27.408 "num_base_bdevs": 2, 00:22:27.408 "num_base_bdevs_discovered": 1, 00:22:27.408 "num_base_bdevs_operational": 2, 00:22:27.408 "base_bdevs_list": [ 00:22:27.408 { 00:22:27.408 "name": "BaseBdev1", 00:22:27.408 "uuid": "cd94a40f-624c-40d0-b5d1-bf28acc5fadf", 00:22:27.408 "is_configured": true, 00:22:27.408 "data_offset": 256, 00:22:27.408 "data_size": 7936 00:22:27.408 }, 00:22:27.408 { 00:22:27.408 "name": "BaseBdev2", 00:22:27.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.408 "is_configured": false, 00:22:27.408 "data_offset": 0, 00:22:27.408 "data_size": 0 00:22:27.408 } 00:22:27.408 ] 00:22:27.408 }' 00:22:27.408 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:27.408 23:37:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:27.971 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:28.230 [2024-05-14 23:37:51.452750] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:28.230 [2024-05-14 23:37:51.452813] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:22:28.230 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:28.489 [2024-05-14 23:37:51.680830] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:28.489 [2024-05-14 23:37:51.682466] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:28.489 [2024-05-14 23:37:51.682517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 2 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.489 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.747 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:28.747 "name": "Existed_Raid", 00:22:28.747 "uuid": "bb2dd6e3-f196-492d-90a3-4525557a8259", 00:22:28.747 "strip_size_kb": 0, 00:22:28.747 "state": "configuring", 00:22:28.747 "raid_level": "raid1", 00:22:28.747 "superblock": true, 00:22:28.747 "num_base_bdevs": 2, 00:22:28.747 "num_base_bdevs_discovered": 1, 00:22:28.747 "num_base_bdevs_operational": 2, 00:22:28.747 "base_bdevs_list": [ 00:22:28.747 { 00:22:28.747 "name": "BaseBdev1", 00:22:28.747 "uuid": "cd94a40f-624c-40d0-b5d1-bf28acc5fadf", 00:22:28.747 "is_configured": true, 00:22:28.747 "data_offset": 256, 00:22:28.747 "data_size": 7936 00:22:28.747 }, 00:22:28.747 { 00:22:28.747 "name": "BaseBdev2", 00:22:28.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.747 "is_configured": false, 00:22:28.747 "data_offset": 0, 00:22:28.747 "data_size": 0 00:22:28.747 } 00:22:28.747 ] 00:22:28.747 }' 00:22:28.747 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:28.747 23:37:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:29.679 23:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:22:29.679 BaseBdev2 00:22:29.679 [2024-05-14 23:37:52.834761] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:29.679 [2024-05-14 23:37:52.834968] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:22:29.679 [2024-05-14 23:37:52.834996] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:29.679 [2024-05-14 23:37:52.835096] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:22:29.679 [2024-05-14 23:37:52.835421] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:22:29.679 [2024-05-14 23:37:52.835437] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:22:29.679 [2024-05-14 23:37:52.835560] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.679 23:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:22:29.679 23:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:29.679 23:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:29.679 23:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:22:29.679 23:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:29.679 23:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:29.679 23:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:29.937 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:30.195 [ 00:22:30.195 { 00:22:30.195 "name": "BaseBdev2", 00:22:30.195 "aliases": [ 00:22:30.195 "a31f0bb3-a592-49e4-854a-6e8078897588" 00:22:30.195 ], 00:22:30.195 "product_name": "Malloc disk", 00:22:30.195 "block_size": 4096, 00:22:30.195 "num_blocks": 8192, 00:22:30.195 "uuid": "a31f0bb3-a592-49e4-854a-6e8078897588", 00:22:30.195 "assigned_rate_limits": { 00:22:30.195 "rw_ios_per_sec": 0, 00:22:30.195 "rw_mbytes_per_sec": 0, 00:22:30.195 "r_mbytes_per_sec": 0, 00:22:30.195 "w_mbytes_per_sec": 0 00:22:30.195 }, 00:22:30.195 "claimed": true, 00:22:30.195 "claim_type": "exclusive_write", 00:22:30.195 "zoned": false, 00:22:30.195 "supported_io_types": { 00:22:30.195 "read": true, 00:22:30.195 "write": true, 00:22:30.195 "unmap": true, 00:22:30.195 "write_zeroes": true, 00:22:30.195 "flush": true, 00:22:30.195 "reset": true, 00:22:30.195 "compare": false, 00:22:30.195 "compare_and_write": false, 00:22:30.195 "abort": true, 00:22:30.195 "nvme_admin": false, 00:22:30.195 "nvme_io": false 00:22:30.195 }, 00:22:30.195 "memory_domains": [ 00:22:30.195 { 00:22:30.195 "dma_device_id": "system", 00:22:30.195 "dma_device_type": 1 00:22:30.195 }, 00:22:30.195 { 00:22:30.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.195 "dma_device_type": 2 00:22:30.195 } 00:22:30.195 ], 00:22:30.195 "driver_specific": {} 00:22:30.195 } 00:22:30.195 ] 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.195 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.453 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:30.453 "name": "Existed_Raid", 00:22:30.453 "uuid": "bb2dd6e3-f196-492d-90a3-4525557a8259", 00:22:30.453 "strip_size_kb": 0, 00:22:30.453 "state": "online", 00:22:30.453 "raid_level": "raid1", 00:22:30.453 "superblock": true, 00:22:30.453 "num_base_bdevs": 2, 00:22:30.453 "num_base_bdevs_discovered": 2, 00:22:30.453 "num_base_bdevs_operational": 2, 00:22:30.453 "base_bdevs_list": [ 00:22:30.453 { 00:22:30.453 "name": "BaseBdev1", 00:22:30.453 "uuid": "cd94a40f-624c-40d0-b5d1-bf28acc5fadf", 00:22:30.453 "is_configured": true, 00:22:30.453 "data_offset": 256, 00:22:30.453 "data_size": 7936 00:22:30.453 }, 00:22:30.453 { 00:22:30.453 "name": "BaseBdev2", 00:22:30.453 "uuid": "a31f0bb3-a592-49e4-854a-6e8078897588", 00:22:30.453 "is_configured": true, 00:22:30.453 "data_offset": 256, 00:22:30.453 "data_size": 7936 00:22:30.453 } 00:22:30.453 ] 00:22:30.453 }' 00:22:30.453 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:30.453 23:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:31.020 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:22:31.020 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:22:31.020 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:31.020 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:31.020 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:31.020 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # local name 00:22:31.020 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:31.020 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:31.278 [2024-05-14 23:37:54.399175] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:31.278 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:31.278 "name": "Existed_Raid", 00:22:31.278 "aliases": [ 00:22:31.278 "bb2dd6e3-f196-492d-90a3-4525557a8259" 00:22:31.278 ], 00:22:31.278 "product_name": 
"Raid Volume", 00:22:31.278 "block_size": 4096, 00:22:31.278 "num_blocks": 7936, 00:22:31.278 "uuid": "bb2dd6e3-f196-492d-90a3-4525557a8259", 00:22:31.278 "assigned_rate_limits": { 00:22:31.278 "rw_ios_per_sec": 0, 00:22:31.278 "rw_mbytes_per_sec": 0, 00:22:31.278 "r_mbytes_per_sec": 0, 00:22:31.278 "w_mbytes_per_sec": 0 00:22:31.278 }, 00:22:31.278 "claimed": false, 00:22:31.278 "zoned": false, 00:22:31.278 "supported_io_types": { 00:22:31.278 "read": true, 00:22:31.278 "write": true, 00:22:31.278 "unmap": false, 00:22:31.278 "write_zeroes": true, 00:22:31.278 "flush": false, 00:22:31.278 "reset": true, 00:22:31.278 "compare": false, 00:22:31.278 "compare_and_write": false, 00:22:31.278 "abort": false, 00:22:31.278 "nvme_admin": false, 00:22:31.278 "nvme_io": false 00:22:31.278 }, 00:22:31.278 "memory_domains": [ 00:22:31.278 { 00:22:31.278 "dma_device_id": "system", 00:22:31.278 "dma_device_type": 1 00:22:31.278 }, 00:22:31.278 { 00:22:31.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.278 "dma_device_type": 2 00:22:31.278 }, 00:22:31.278 { 00:22:31.278 "dma_device_id": "system", 00:22:31.278 "dma_device_type": 1 00:22:31.278 }, 00:22:31.278 { 00:22:31.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.278 "dma_device_type": 2 00:22:31.278 } 00:22:31.278 ], 00:22:31.278 "driver_specific": { 00:22:31.278 "raid": { 00:22:31.278 "uuid": "bb2dd6e3-f196-492d-90a3-4525557a8259", 00:22:31.278 "strip_size_kb": 0, 00:22:31.278 "state": "online", 00:22:31.278 "raid_level": "raid1", 00:22:31.278 "superblock": true, 00:22:31.278 "num_base_bdevs": 2, 00:22:31.278 "num_base_bdevs_discovered": 2, 00:22:31.278 "num_base_bdevs_operational": 2, 00:22:31.278 "base_bdevs_list": [ 00:22:31.278 { 00:22:31.278 "name": "BaseBdev1", 00:22:31.278 "uuid": "cd94a40f-624c-40d0-b5d1-bf28acc5fadf", 00:22:31.278 "is_configured": true, 00:22:31.278 "data_offset": 256, 00:22:31.278 "data_size": 7936 00:22:31.278 }, 00:22:31.278 { 00:22:31.278 "name": "BaseBdev2", 00:22:31.278 "uuid": "a31f0bb3-a592-49e4-854a-6e8078897588", 00:22:31.278 "is_configured": true, 00:22:31.278 "data_offset": 256, 00:22:31.278 "data_size": 7936 00:22:31.278 } 00:22:31.278 ] 00:22:31.278 } 00:22:31.278 } 00:22:31.278 }' 00:22:31.278 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:31.278 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:22:31.278 BaseBdev2' 00:22:31.278 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:31.278 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:31.278 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:31.536 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:31.536 "name": "BaseBdev1", 00:22:31.536 "aliases": [ 00:22:31.536 "cd94a40f-624c-40d0-b5d1-bf28acc5fadf" 00:22:31.536 ], 00:22:31.536 "product_name": "Malloc disk", 00:22:31.536 "block_size": 4096, 00:22:31.536 "num_blocks": 8192, 00:22:31.536 "uuid": "cd94a40f-624c-40d0-b5d1-bf28acc5fadf", 00:22:31.536 "assigned_rate_limits": { 00:22:31.536 "rw_ios_per_sec": 0, 00:22:31.536 "rw_mbytes_per_sec": 0, 00:22:31.536 "r_mbytes_per_sec": 0, 00:22:31.536 "w_mbytes_per_sec": 0 
00:22:31.536 }, 00:22:31.536 "claimed": true, 00:22:31.536 "claim_type": "exclusive_write", 00:22:31.536 "zoned": false, 00:22:31.536 "supported_io_types": { 00:22:31.536 "read": true, 00:22:31.536 "write": true, 00:22:31.536 "unmap": true, 00:22:31.536 "write_zeroes": true, 00:22:31.536 "flush": true, 00:22:31.536 "reset": true, 00:22:31.536 "compare": false, 00:22:31.536 "compare_and_write": false, 00:22:31.536 "abort": true, 00:22:31.536 "nvme_admin": false, 00:22:31.536 "nvme_io": false 00:22:31.536 }, 00:22:31.536 "memory_domains": [ 00:22:31.536 { 00:22:31.536 "dma_device_id": "system", 00:22:31.536 "dma_device_type": 1 00:22:31.536 }, 00:22:31.536 { 00:22:31.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.536 "dma_device_type": 2 00:22:31.536 } 00:22:31.536 ], 00:22:31.536 "driver_specific": {} 00:22:31.536 }' 00:22:31.536 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:31.536 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:31.536 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:22:31.536 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:31.813 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:31.813 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:31.813 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:31.813 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:31.813 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:31.813 23:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:31.813 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:31.813 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:31.814 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:31.814 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:31.814 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:32.071 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:32.071 "name": "BaseBdev2", 00:22:32.071 "aliases": [ 00:22:32.071 "a31f0bb3-a592-49e4-854a-6e8078897588" 00:22:32.071 ], 00:22:32.071 "product_name": "Malloc disk", 00:22:32.071 "block_size": 4096, 00:22:32.071 "num_blocks": 8192, 00:22:32.071 "uuid": "a31f0bb3-a592-49e4-854a-6e8078897588", 00:22:32.071 "assigned_rate_limits": { 00:22:32.071 "rw_ios_per_sec": 0, 00:22:32.071 "rw_mbytes_per_sec": 0, 00:22:32.071 "r_mbytes_per_sec": 0, 00:22:32.071 "w_mbytes_per_sec": 0 00:22:32.071 }, 00:22:32.071 "claimed": true, 00:22:32.071 "claim_type": "exclusive_write", 00:22:32.071 "zoned": false, 00:22:32.071 "supported_io_types": { 00:22:32.071 "read": true, 00:22:32.071 "write": true, 00:22:32.071 "unmap": true, 00:22:32.071 "write_zeroes": true, 00:22:32.071 "flush": true, 00:22:32.071 "reset": true, 00:22:32.071 "compare": false, 00:22:32.071 "compare_and_write": false, 
00:22:32.071 "abort": true, 00:22:32.071 "nvme_admin": false, 00:22:32.071 "nvme_io": false 00:22:32.071 }, 00:22:32.071 "memory_domains": [ 00:22:32.071 { 00:22:32.071 "dma_device_id": "system", 00:22:32.071 "dma_device_type": 1 00:22:32.071 }, 00:22:32.071 { 00:22:32.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.071 "dma_device_type": 2 00:22:32.071 } 00:22:32.071 ], 00:22:32.071 "driver_specific": {} 00:22:32.071 }' 00:22:32.071 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:32.330 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:32.330 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:22:32.330 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:32.330 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:32.330 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:32.330 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:32.330 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:32.588 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:32.588 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:32.589 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:32.589 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:32.589 23:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:32.847 [2024-05-14 23:37:55.943302] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # local expected_state 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # case $1 in 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.847 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.105 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:33.105 "name": "Existed_Raid", 00:22:33.105 "uuid": "bb2dd6e3-f196-492d-90a3-4525557a8259", 00:22:33.105 "strip_size_kb": 0, 00:22:33.105 "state": "online", 00:22:33.106 "raid_level": "raid1", 00:22:33.106 "superblock": true, 00:22:33.106 "num_base_bdevs": 2, 00:22:33.106 "num_base_bdevs_discovered": 1, 00:22:33.106 "num_base_bdevs_operational": 1, 00:22:33.106 "base_bdevs_list": [ 00:22:33.106 { 00:22:33.106 "name": null, 00:22:33.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.106 "is_configured": false, 00:22:33.106 "data_offset": 256, 00:22:33.106 "data_size": 7936 00:22:33.106 }, 00:22:33.106 { 00:22:33.106 "name": "BaseBdev2", 00:22:33.106 "uuid": "a31f0bb3-a592-49e4-854a-6e8078897588", 00:22:33.106 "is_configured": true, 00:22:33.106 "data_offset": 256, 00:22:33.106 "data_size": 7936 00:22:33.106 } 00:22:33.106 ] 00:22:33.106 }' 00:22:33.106 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:33.106 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:33.672 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:33.672 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:33.672 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.672 23:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:22:33.931 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:22:33.931 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:33.931 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:34.189 [2024-05-14 23:37:57.362990] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:34.189 [2024-05-14 23:37:57.363080] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:34.189 [2024-05-14 23:37:57.446113] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:34.189 [2024-05-14 23:37:57.446237] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:34.189 [2024-05-14 23:37:57.446255] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:22:34.189 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:34.189 23:37:57 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:34.189 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.189 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@342 -- # killprocess 72916 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 72916 ']' 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 72916 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72916 00:22:34.447 killing process with pid 72916 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72916' 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@965 -- # kill 72916 00:22:34.447 23:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # wait 72916 00:22:34.447 [2024-05-14 23:37:57.683508] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:34.447 [2024-05-14 23:37:57.683624] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:35.824 23:37:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@344 -- # return 0 00:22:35.824 ************************************ 00:22:35.824 END TEST raid_state_function_test_sb_4k 00:22:35.824 ************************************ 00:22:35.824 00:22:35.824 real 0m11.643s 00:22:35.824 user 0m20.596s 00:22:35.824 sys 0m1.233s 00:22:35.824 23:37:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:35.824 23:37:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:35.824 23:37:58 bdev_raid -- bdev/bdev_raid.sh@845 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:22:35.824 23:37:58 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:35.824 23:37:58 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:35.824 23:37:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:35.824 ************************************ 00:22:35.825 START TEST raid_superblock_test_4k 00:22:35.825 ************************************ 00:22:35.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
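As a reading aid for the raid_superblock_test_4k trace that follows: the test talks to the bdev_svc app listening on /var/tmp/spdk-raid.sock through scripts/rpc.py. The sketch below is a condensed, hand-written summary (not captured log output) of the bdev construction sequence the trace exercises; every command, path, and size in it is taken from invocations that appear verbatim later in this log, and it assumes bdev_svc is already running on that socket.

```bash
#!/usr/bin/env bash
# Condensed sketch of the RPC sequence driven by raid_superblock_test_4k.
# Assumes bdev_svc was started with: bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Two malloc base bdevs (32 MB, 4096-byte blocks), each wrapped in a passthru bdev.
"$rpc" -s "$sock" bdev_malloc_create 32 4096 -b malloc1
"$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
"$rpc" -s "$sock" bdev_malloc_create 32 4096 -b malloc2
"$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# Assemble a raid1 bdev with an on-disk superblock (-s) on top of the passthru bdevs.
"$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

# Inspect the resulting state the same way the test does.
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
```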
00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=73290 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 73290 /var/tmp/spdk-raid.sock 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@827 -- # '[' -z 73290 ']' 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:35.825 23:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:35.825 [2024-05-14 23:37:59.090405] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:22:35.825 [2024-05-14 23:37:59.090580] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73290 ] 00:22:36.083 [2024-05-14 23:37:59.260064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.341 [2024-05-14 23:37:59.473791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.599 [2024-05-14 23:37:59.671682] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.858 23:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:36.858 23:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # return 0 00:22:36.858 23:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:36.858 23:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:36.858 23:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:36.858 23:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:36.858 23:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:36.858 23:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:36.858 23:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:36.858 23:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:36.858 23:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:22:37.116 malloc1 00:22:37.116 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:37.116 [2024-05-14 23:38:00.396820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:37.116 [2024-05-14 23:38:00.396935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.116 [2024-05-14 23:38:00.396993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:22:37.116 [2024-05-14 23:38:00.397037] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.116 [2024-05-14 23:38:00.399106] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.116 [2024-05-14 23:38:00.399173] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:37.116 pt1 00:22:37.374 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:37.374 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:37.374 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:37.374 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:37.374 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:37.374 23:38:00 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:37.374 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:37.374 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:37.374 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:22:37.374 malloc2 00:22:37.374 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:37.632 [2024-05-14 23:38:00.859255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:37.632 [2024-05-14 23:38:00.859340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.632 [2024-05-14 23:38:00.859393] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:22:37.632 [2024-05-14 23:38:00.859433] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.632 [2024-05-14 23:38:00.861414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.632 [2024-05-14 23:38:00.861480] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:37.632 pt2 00:22:37.632 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:37.632 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:37.632 23:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:22:37.891 [2024-05-14 23:38:01.103377] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:37.891 [2024-05-14 23:38:01.105564] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:37.891 [2024-05-14 23:38:01.105886] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:22:37.891 [2024-05-14 23:38:01.105919] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:37.891 [2024-05-14 23:38:01.106220] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:22:37.891 [2024-05-14 23:38:01.106744] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:22:37.891 [2024-05-14 23:38:01.106817] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:22:37.891 [2024-05-14 23:38:01.107099] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.891 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.149 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:38.149 "name": "raid_bdev1", 00:22:38.149 "uuid": "69601175-13bb-40c0-857b-1de88d181fc9", 00:22:38.149 "strip_size_kb": 0, 00:22:38.149 "state": "online", 00:22:38.149 "raid_level": "raid1", 00:22:38.149 "superblock": true, 00:22:38.149 "num_base_bdevs": 2, 00:22:38.149 "num_base_bdevs_discovered": 2, 00:22:38.149 "num_base_bdevs_operational": 2, 00:22:38.149 "base_bdevs_list": [ 00:22:38.149 { 00:22:38.149 "name": "pt1", 00:22:38.149 "uuid": "66fd92f3-5a24-5058-bc6c-ebf2aebff278", 00:22:38.149 "is_configured": true, 00:22:38.149 "data_offset": 256, 00:22:38.149 "data_size": 7936 00:22:38.149 }, 00:22:38.149 { 00:22:38.149 "name": "pt2", 00:22:38.149 "uuid": "0be3133d-572b-5a4b-836d-2cbe5b5e5258", 00:22:38.149 "is_configured": true, 00:22:38.149 "data_offset": 256, 00:22:38.149 "data_size": 7936 00:22:38.149 } 00:22:38.149 ] 00:22:38.149 }' 00:22:38.149 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:38.149 23:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.716 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:38.716 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:22:38.716 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:38.716 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:38.716 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:38.716 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:22:38.716 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:38.716 23:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:38.975 [2024-05-14 23:38:02.123624] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:38.975 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:38.975 "name": "raid_bdev1", 00:22:38.975 "aliases": [ 00:22:38.975 "69601175-13bb-40c0-857b-1de88d181fc9" 00:22:38.975 ], 00:22:38.975 "product_name": "Raid Volume", 00:22:38.975 "block_size": 4096, 00:22:38.975 "num_blocks": 7936, 00:22:38.975 "uuid": "69601175-13bb-40c0-857b-1de88d181fc9", 00:22:38.975 "assigned_rate_limits": { 00:22:38.975 
"rw_ios_per_sec": 0, 00:22:38.975 "rw_mbytes_per_sec": 0, 00:22:38.975 "r_mbytes_per_sec": 0, 00:22:38.975 "w_mbytes_per_sec": 0 00:22:38.975 }, 00:22:38.975 "claimed": false, 00:22:38.975 "zoned": false, 00:22:38.975 "supported_io_types": { 00:22:38.975 "read": true, 00:22:38.975 "write": true, 00:22:38.975 "unmap": false, 00:22:38.975 "write_zeroes": true, 00:22:38.975 "flush": false, 00:22:38.975 "reset": true, 00:22:38.975 "compare": false, 00:22:38.975 "compare_and_write": false, 00:22:38.975 "abort": false, 00:22:38.975 "nvme_admin": false, 00:22:38.975 "nvme_io": false 00:22:38.975 }, 00:22:38.975 "memory_domains": [ 00:22:38.975 { 00:22:38.975 "dma_device_id": "system", 00:22:38.975 "dma_device_type": 1 00:22:38.975 }, 00:22:38.975 { 00:22:38.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.975 "dma_device_type": 2 00:22:38.975 }, 00:22:38.975 { 00:22:38.975 "dma_device_id": "system", 00:22:38.975 "dma_device_type": 1 00:22:38.975 }, 00:22:38.975 { 00:22:38.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.975 "dma_device_type": 2 00:22:38.975 } 00:22:38.975 ], 00:22:38.975 "driver_specific": { 00:22:38.975 "raid": { 00:22:38.975 "uuid": "69601175-13bb-40c0-857b-1de88d181fc9", 00:22:38.975 "strip_size_kb": 0, 00:22:38.975 "state": "online", 00:22:38.975 "raid_level": "raid1", 00:22:38.975 "superblock": true, 00:22:38.975 "num_base_bdevs": 2, 00:22:38.975 "num_base_bdevs_discovered": 2, 00:22:38.975 "num_base_bdevs_operational": 2, 00:22:38.975 "base_bdevs_list": [ 00:22:38.975 { 00:22:38.975 "name": "pt1", 00:22:38.975 "uuid": "66fd92f3-5a24-5058-bc6c-ebf2aebff278", 00:22:38.975 "is_configured": true, 00:22:38.975 "data_offset": 256, 00:22:38.975 "data_size": 7936 00:22:38.975 }, 00:22:38.975 { 00:22:38.975 "name": "pt2", 00:22:38.975 "uuid": "0be3133d-572b-5a4b-836d-2cbe5b5e5258", 00:22:38.975 "is_configured": true, 00:22:38.975 "data_offset": 256, 00:22:38.975 "data_size": 7936 00:22:38.975 } 00:22:38.975 ] 00:22:38.975 } 00:22:38.975 } 00:22:38.975 }' 00:22:38.975 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:38.975 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:22:38.975 pt2' 00:22:38.975 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:38.975 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:38.975 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:39.234 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:39.234 "name": "pt1", 00:22:39.234 "aliases": [ 00:22:39.234 "66fd92f3-5a24-5058-bc6c-ebf2aebff278" 00:22:39.234 ], 00:22:39.234 "product_name": "passthru", 00:22:39.234 "block_size": 4096, 00:22:39.234 "num_blocks": 8192, 00:22:39.234 "uuid": "66fd92f3-5a24-5058-bc6c-ebf2aebff278", 00:22:39.234 "assigned_rate_limits": { 00:22:39.234 "rw_ios_per_sec": 0, 00:22:39.234 "rw_mbytes_per_sec": 0, 00:22:39.234 "r_mbytes_per_sec": 0, 00:22:39.234 "w_mbytes_per_sec": 0 00:22:39.234 }, 00:22:39.234 "claimed": true, 00:22:39.234 "claim_type": "exclusive_write", 00:22:39.234 "zoned": false, 00:22:39.234 "supported_io_types": { 00:22:39.234 "read": true, 00:22:39.234 "write": true, 00:22:39.234 "unmap": true, 00:22:39.234 "write_zeroes": true, 
00:22:39.234 "flush": true, 00:22:39.234 "reset": true, 00:22:39.234 "compare": false, 00:22:39.234 "compare_and_write": false, 00:22:39.234 "abort": true, 00:22:39.234 "nvme_admin": false, 00:22:39.234 "nvme_io": false 00:22:39.234 }, 00:22:39.234 "memory_domains": [ 00:22:39.234 { 00:22:39.234 "dma_device_id": "system", 00:22:39.234 "dma_device_type": 1 00:22:39.234 }, 00:22:39.234 { 00:22:39.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.234 "dma_device_type": 2 00:22:39.234 } 00:22:39.234 ], 00:22:39.234 "driver_specific": { 00:22:39.234 "passthru": { 00:22:39.234 "name": "pt1", 00:22:39.234 "base_bdev_name": "malloc1" 00:22:39.234 } 00:22:39.234 } 00:22:39.234 }' 00:22:39.234 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:39.493 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:39.493 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:22:39.493 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:39.493 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:39.493 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:39.493 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:39.493 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:39.752 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:39.752 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:39.752 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:39.752 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:39.752 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:39.752 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:39.752 23:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:40.011 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:40.011 "name": "pt2", 00:22:40.011 "aliases": [ 00:22:40.011 "0be3133d-572b-5a4b-836d-2cbe5b5e5258" 00:22:40.011 ], 00:22:40.011 "product_name": "passthru", 00:22:40.011 "block_size": 4096, 00:22:40.011 "num_blocks": 8192, 00:22:40.011 "uuid": "0be3133d-572b-5a4b-836d-2cbe5b5e5258", 00:22:40.011 "assigned_rate_limits": { 00:22:40.011 "rw_ios_per_sec": 0, 00:22:40.011 "rw_mbytes_per_sec": 0, 00:22:40.011 "r_mbytes_per_sec": 0, 00:22:40.011 "w_mbytes_per_sec": 0 00:22:40.011 }, 00:22:40.011 "claimed": true, 00:22:40.011 "claim_type": "exclusive_write", 00:22:40.011 "zoned": false, 00:22:40.011 "supported_io_types": { 00:22:40.011 "read": true, 00:22:40.011 "write": true, 00:22:40.011 "unmap": true, 00:22:40.011 "write_zeroes": true, 00:22:40.011 "flush": true, 00:22:40.011 "reset": true, 00:22:40.011 "compare": false, 00:22:40.011 "compare_and_write": false, 00:22:40.011 "abort": true, 00:22:40.011 "nvme_admin": false, 00:22:40.011 "nvme_io": false 00:22:40.011 }, 00:22:40.011 "memory_domains": [ 00:22:40.011 { 00:22:40.011 "dma_device_id": "system", 00:22:40.011 "dma_device_type": 1 00:22:40.011 }, 00:22:40.011 { 00:22:40.011 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.011 "dma_device_type": 2 00:22:40.011 } 00:22:40.011 ], 00:22:40.011 "driver_specific": { 00:22:40.011 "passthru": { 00:22:40.011 "name": "pt2", 00:22:40.011 "base_bdev_name": "malloc2" 00:22:40.011 } 00:22:40.011 } 00:22:40.011 }' 00:22:40.011 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:40.011 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:40.011 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:22:40.011 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:40.011 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:40.276 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:40.276 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:40.276 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:40.276 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:40.276 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:40.276 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:40.276 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:40.276 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:40.276 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:40.536 [2024-05-14 23:38:03.771795] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:40.536 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=69601175-13bb-40c0-857b-1de88d181fc9 00:22:40.536 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 69601175-13bb-40c0-857b-1de88d181fc9 ']' 00:22:40.536 23:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:40.794 [2024-05-14 23:38:04.011697] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:40.795 [2024-05-14 23:38:04.011733] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:40.795 [2024-05-14 23:38:04.011807] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.795 [2024-05-14 23:38:04.011855] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.795 [2024-05-14 23:38:04.011867] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:22:40.795 23:38:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:40.795 23:38:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.053 23:38:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:41.053 23:38:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:41.053 23:38:04 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:41.053 23:38:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:41.311 23:38:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:41.311 23:38:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:41.569 23:38:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:41.569 23:38:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:41.867 23:38:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:41.867 [2024-05-14 23:38:05.143835] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:41.867 [2024-05-14 23:38:05.145842] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:41.867 [2024-05-14 23:38:05.145929] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:41.867 [2024-05-14 23:38:05.146024] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:41.867 [2024-05-14 23:38:05.146078] bdev_raid.c:2310:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:22:41.867 [2024-05-14 23:38:05.146097] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:22:41.867 request: 00:22:41.867 { 00:22:41.867 "name": "raid_bdev1", 00:22:41.867 "raid_level": "raid1", 00:22:41.867 "base_bdevs": [ 00:22:41.867 "malloc1", 00:22:41.867 "malloc2" 00:22:41.867 ], 00:22:41.867 "superblock": false, 00:22:41.867 "method": "bdev_raid_create", 00:22:41.867 "req_id": 1 00:22:41.867 } 00:22:41.867 Got JSON-RPC error response 00:22:41.867 response: 00:22:41.867 { 00:22:41.867 "code": -17, 00:22:41.867 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:41.867 } 00:22:42.126 23:38:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:22:42.126 23:38:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:42.126 23:38:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:42.126 23:38:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:42.126 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.126 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:42.126 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:42.126 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:42.126 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:42.385 [2024-05-14 23:38:05.575847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:42.385 [2024-05-14 23:38:05.575955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.385 [2024-05-14 23:38:05.576005] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:22:42.385 [2024-05-14 23:38:05.576035] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.385 [2024-05-14 23:38:05.577865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.385 [2024-05-14 23:38:05.577910] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:42.385 [2024-05-14 23:38:05.577998] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:42.385 [2024-05-14 23:38:05.578055] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:42.385 pt1 00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.385 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.644 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:42.644 "name": "raid_bdev1", 00:22:42.644 "uuid": "69601175-13bb-40c0-857b-1de88d181fc9", 00:22:42.644 "strip_size_kb": 0, 00:22:42.644 "state": "configuring", 00:22:42.644 "raid_level": "raid1", 00:22:42.644 "superblock": true, 00:22:42.644 "num_base_bdevs": 2, 00:22:42.644 "num_base_bdevs_discovered": 1, 00:22:42.644 "num_base_bdevs_operational": 2, 00:22:42.644 "base_bdevs_list": [ 00:22:42.644 { 00:22:42.644 "name": "pt1", 00:22:42.644 "uuid": "66fd92f3-5a24-5058-bc6c-ebf2aebff278", 00:22:42.644 "is_configured": true, 00:22:42.644 "data_offset": 256, 00:22:42.644 "data_size": 7936 00:22:42.644 }, 00:22:42.644 { 00:22:42.644 "name": null, 00:22:42.644 "uuid": "0be3133d-572b-5a4b-836d-2cbe5b5e5258", 00:22:42.644 "is_configured": false, 00:22:42.644 "data_offset": 256, 00:22:42.644 "data_size": 7936 00:22:42.644 } 00:22:42.644 ] 00:22:42.644 }' 00:22:42.644 23:38:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:42.644 23:38:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.211 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:43.211 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:43.211 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:43.211 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:43.469 [2024-05-14 23:38:06.675982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:43.469 [2024-05-14 23:38:06.676107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.469 [2024-05-14 23:38:06.676363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:22:43.469 [2024-05-14 23:38:06.676408] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.469 [2024-05-14 23:38:06.676783] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.469 [2024-05-14 23:38:06.676821] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:43.469 [2024-05-14 23:38:06.676906] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:43.469 [2024-05-14 23:38:06.676930] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:43.469 [2024-05-14 23:38:06.677015] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000011880 00:22:43.469 [2024-05-14 23:38:06.677028] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:43.469 [2024-05-14 23:38:06.677110] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:43.469 [2024-05-14 23:38:06.677342] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:22:43.469 [2024-05-14 23:38:06.677359] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:22:43.469 [2024-05-14 23:38:06.677463] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.469 pt2 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.469 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.727 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:43.727 "name": "raid_bdev1", 00:22:43.727 "uuid": "69601175-13bb-40c0-857b-1de88d181fc9", 00:22:43.727 "strip_size_kb": 0, 00:22:43.727 "state": "online", 00:22:43.727 "raid_level": "raid1", 00:22:43.727 "superblock": true, 00:22:43.727 "num_base_bdevs": 2, 00:22:43.727 "num_base_bdevs_discovered": 2, 00:22:43.727 "num_base_bdevs_operational": 2, 00:22:43.727 "base_bdevs_list": [ 00:22:43.727 { 00:22:43.727 "name": "pt1", 00:22:43.727 "uuid": "66fd92f3-5a24-5058-bc6c-ebf2aebff278", 00:22:43.727 "is_configured": true, 00:22:43.727 "data_offset": 256, 00:22:43.727 "data_size": 7936 00:22:43.727 }, 00:22:43.727 { 00:22:43.727 "name": "pt2", 00:22:43.727 "uuid": "0be3133d-572b-5a4b-836d-2cbe5b5e5258", 00:22:43.727 "is_configured": true, 00:22:43.727 "data_offset": 256, 00:22:43.727 "data_size": 7936 00:22:43.727 } 00:22:43.727 ] 00:22:43.727 }' 00:22:43.727 23:38:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:43.727 23:38:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.663 23:38:07 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:44.663 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:22:44.663 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:44.663 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:44.663 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:44.663 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:22:44.663 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:44.663 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:44.663 [2024-05-14 23:38:07.836260] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.663 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:44.663 "name": "raid_bdev1", 00:22:44.663 "aliases": [ 00:22:44.663 "69601175-13bb-40c0-857b-1de88d181fc9" 00:22:44.663 ], 00:22:44.663 "product_name": "Raid Volume", 00:22:44.663 "block_size": 4096, 00:22:44.663 "num_blocks": 7936, 00:22:44.663 "uuid": "69601175-13bb-40c0-857b-1de88d181fc9", 00:22:44.663 "assigned_rate_limits": { 00:22:44.663 "rw_ios_per_sec": 0, 00:22:44.663 "rw_mbytes_per_sec": 0, 00:22:44.663 "r_mbytes_per_sec": 0, 00:22:44.663 "w_mbytes_per_sec": 0 00:22:44.663 }, 00:22:44.663 "claimed": false, 00:22:44.663 "zoned": false, 00:22:44.663 "supported_io_types": { 00:22:44.663 "read": true, 00:22:44.663 "write": true, 00:22:44.663 "unmap": false, 00:22:44.663 "write_zeroes": true, 00:22:44.663 "flush": false, 00:22:44.663 "reset": true, 00:22:44.663 "compare": false, 00:22:44.663 "compare_and_write": false, 00:22:44.663 "abort": false, 00:22:44.663 "nvme_admin": false, 00:22:44.663 "nvme_io": false 00:22:44.663 }, 00:22:44.663 "memory_domains": [ 00:22:44.663 { 00:22:44.663 "dma_device_id": "system", 00:22:44.663 "dma_device_type": 1 00:22:44.663 }, 00:22:44.663 { 00:22:44.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.663 "dma_device_type": 2 00:22:44.663 }, 00:22:44.663 { 00:22:44.663 "dma_device_id": "system", 00:22:44.663 "dma_device_type": 1 00:22:44.663 }, 00:22:44.663 { 00:22:44.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.663 "dma_device_type": 2 00:22:44.663 } 00:22:44.663 ], 00:22:44.663 "driver_specific": { 00:22:44.663 "raid": { 00:22:44.663 "uuid": "69601175-13bb-40c0-857b-1de88d181fc9", 00:22:44.663 "strip_size_kb": 0, 00:22:44.663 "state": "online", 00:22:44.663 "raid_level": "raid1", 00:22:44.663 "superblock": true, 00:22:44.663 "num_base_bdevs": 2, 00:22:44.663 "num_base_bdevs_discovered": 2, 00:22:44.663 "num_base_bdevs_operational": 2, 00:22:44.663 "base_bdevs_list": [ 00:22:44.663 { 00:22:44.663 "name": "pt1", 00:22:44.663 "uuid": "66fd92f3-5a24-5058-bc6c-ebf2aebff278", 00:22:44.663 "is_configured": true, 00:22:44.663 "data_offset": 256, 00:22:44.663 "data_size": 7936 00:22:44.663 }, 00:22:44.663 { 00:22:44.663 "name": "pt2", 00:22:44.663 "uuid": "0be3133d-572b-5a4b-836d-2cbe5b5e5258", 00:22:44.663 "is_configured": true, 00:22:44.664 "data_offset": 256, 00:22:44.664 "data_size": 7936 00:22:44.664 } 00:22:44.664 ] 00:22:44.664 } 00:22:44.664 } 00:22:44.664 }' 00:22:44.664 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # 
jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:44.664 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:22:44.664 pt2' 00:22:44.664 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:44.664 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:44.664 23:38:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:44.922 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:44.922 "name": "pt1", 00:22:44.922 "aliases": [ 00:22:44.922 "66fd92f3-5a24-5058-bc6c-ebf2aebff278" 00:22:44.922 ], 00:22:44.922 "product_name": "passthru", 00:22:44.922 "block_size": 4096, 00:22:44.922 "num_blocks": 8192, 00:22:44.922 "uuid": "66fd92f3-5a24-5058-bc6c-ebf2aebff278", 00:22:44.922 "assigned_rate_limits": { 00:22:44.922 "rw_ios_per_sec": 0, 00:22:44.922 "rw_mbytes_per_sec": 0, 00:22:44.922 "r_mbytes_per_sec": 0, 00:22:44.922 "w_mbytes_per_sec": 0 00:22:44.922 }, 00:22:44.922 "claimed": true, 00:22:44.922 "claim_type": "exclusive_write", 00:22:44.922 "zoned": false, 00:22:44.922 "supported_io_types": { 00:22:44.922 "read": true, 00:22:44.922 "write": true, 00:22:44.922 "unmap": true, 00:22:44.922 "write_zeroes": true, 00:22:44.922 "flush": true, 00:22:44.922 "reset": true, 00:22:44.922 "compare": false, 00:22:44.922 "compare_and_write": false, 00:22:44.922 "abort": true, 00:22:44.922 "nvme_admin": false, 00:22:44.922 "nvme_io": false 00:22:44.922 }, 00:22:44.922 "memory_domains": [ 00:22:44.922 { 00:22:44.922 "dma_device_id": "system", 00:22:44.922 "dma_device_type": 1 00:22:44.922 }, 00:22:44.922 { 00:22:44.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.922 "dma_device_type": 2 00:22:44.922 } 00:22:44.922 ], 00:22:44.922 "driver_specific": { 00:22:44.922 "passthru": { 00:22:44.922 "name": "pt1", 00:22:44.922 "base_bdev_name": "malloc1" 00:22:44.922 } 00:22:44.922 } 00:22:44.922 }' 00:22:44.922 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:44.922 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:45.181 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:22:45.181 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:45.181 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:45.181 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:45.181 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:45.181 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:45.440 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:45.440 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:45.440 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:45.440 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:45.440 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:45.440 23:38:08 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:45.440 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:45.712 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:45.712 "name": "pt2", 00:22:45.712 "aliases": [ 00:22:45.712 "0be3133d-572b-5a4b-836d-2cbe5b5e5258" 00:22:45.712 ], 00:22:45.712 "product_name": "passthru", 00:22:45.712 "block_size": 4096, 00:22:45.712 "num_blocks": 8192, 00:22:45.712 "uuid": "0be3133d-572b-5a4b-836d-2cbe5b5e5258", 00:22:45.712 "assigned_rate_limits": { 00:22:45.712 "rw_ios_per_sec": 0, 00:22:45.712 "rw_mbytes_per_sec": 0, 00:22:45.712 "r_mbytes_per_sec": 0, 00:22:45.712 "w_mbytes_per_sec": 0 00:22:45.712 }, 00:22:45.712 "claimed": true, 00:22:45.712 "claim_type": "exclusive_write", 00:22:45.712 "zoned": false, 00:22:45.712 "supported_io_types": { 00:22:45.712 "read": true, 00:22:45.712 "write": true, 00:22:45.712 "unmap": true, 00:22:45.712 "write_zeroes": true, 00:22:45.712 "flush": true, 00:22:45.712 "reset": true, 00:22:45.712 "compare": false, 00:22:45.712 "compare_and_write": false, 00:22:45.712 "abort": true, 00:22:45.712 "nvme_admin": false, 00:22:45.712 "nvme_io": false 00:22:45.712 }, 00:22:45.712 "memory_domains": [ 00:22:45.712 { 00:22:45.712 "dma_device_id": "system", 00:22:45.712 "dma_device_type": 1 00:22:45.712 }, 00:22:45.712 { 00:22:45.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.712 "dma_device_type": 2 00:22:45.712 } 00:22:45.712 ], 00:22:45.712 "driver_specific": { 00:22:45.712 "passthru": { 00:22:45.712 "name": "pt2", 00:22:45.712 "base_bdev_name": "malloc2" 00:22:45.712 } 00:22:45.712 } 00:22:45.712 }' 00:22:45.712 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:45.712 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:45.712 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:22:45.712 23:38:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:45.984 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:45.984 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:45.984 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:45.984 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:45.984 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:45.984 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:46.243 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:46.243 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:46.243 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:46.243 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:46.243 [2024-05-14 23:38:09.512573] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 69601175-13bb-40c0-857b-1de88d181fc9 '!=' 
69601175-13bb-40c0-857b-1de88d181fc9 ']' 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # case $1 in 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:46.502 [2024-05-14 23:38:09.712543] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.502 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.760 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:46.760 "name": "raid_bdev1", 00:22:46.760 "uuid": "69601175-13bb-40c0-857b-1de88d181fc9", 00:22:46.760 "strip_size_kb": 0, 00:22:46.760 "state": "online", 00:22:46.760 "raid_level": "raid1", 00:22:46.760 "superblock": true, 00:22:46.760 "num_base_bdevs": 2, 00:22:46.760 "num_base_bdevs_discovered": 1, 00:22:46.760 "num_base_bdevs_operational": 1, 00:22:46.760 "base_bdevs_list": [ 00:22:46.760 { 00:22:46.760 "name": null, 00:22:46.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.760 "is_configured": false, 00:22:46.760 "data_offset": 256, 00:22:46.760 "data_size": 7936 00:22:46.760 }, 00:22:46.760 { 00:22:46.760 "name": "pt2", 00:22:46.760 "uuid": "0be3133d-572b-5a4b-836d-2cbe5b5e5258", 00:22:46.760 "is_configured": true, 00:22:46.760 "data_offset": 256, 00:22:46.760 "data_size": 7936 00:22:46.760 } 00:22:46.760 ] 00:22:46.760 }' 00:22:46.760 23:38:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:46.760 23:38:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.327 23:38:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:47.894 [2024-05-14 23:38:10.880619] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:47.894 [2024-05-14 
23:38:10.880661] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:47.894 [2024-05-14 23:38:10.880731] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:47.894 [2024-05-14 23:38:10.880768] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:47.894 [2024-05-14 23:38:10.880781] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:22:47.894 23:38:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.894 23:38:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:47.894 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:47.894 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:47.894 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:47.894 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:47.894 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:48.152 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:48.152 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:48.152 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:48.152 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:48.152 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:22:48.152 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:48.412 [2024-05-14 23:38:11.592737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:48.412 [2024-05-14 23:38:11.592845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.412 [2024-05-14 23:38:11.592920] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e280 00:22:48.412 [2024-05-14 23:38:11.592963] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.412 [2024-05-14 23:38:11.594748] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.412 [2024-05-14 23:38:11.594814] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:48.412 [2024-05-14 23:38:11.594913] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:48.412 [2024-05-14 23:38:11.594960] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:48.412 [2024-05-14 23:38:11.595041] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:22:48.412 [2024-05-14 23:38:11.595054] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:48.412 [2024-05-14 23:38:11.595170] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:48.412 [2024-05-14 23:38:11.595403] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:22:48.412 [2024-05-14 23:38:11.595418] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:22:48.412 [2024-05-14 23:38:11.595546] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.412 pt2 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.412 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.671 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.671 "name": "raid_bdev1", 00:22:48.671 "uuid": "69601175-13bb-40c0-857b-1de88d181fc9", 00:22:48.671 "strip_size_kb": 0, 00:22:48.671 "state": "online", 00:22:48.671 "raid_level": "raid1", 00:22:48.671 "superblock": true, 00:22:48.671 "num_base_bdevs": 2, 00:22:48.671 "num_base_bdevs_discovered": 1, 00:22:48.671 "num_base_bdevs_operational": 1, 00:22:48.671 "base_bdevs_list": [ 00:22:48.671 { 00:22:48.671 "name": null, 00:22:48.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.671 "is_configured": false, 00:22:48.671 "data_offset": 256, 00:22:48.671 "data_size": 7936 00:22:48.671 }, 00:22:48.671 { 00:22:48.671 "name": "pt2", 00:22:48.671 "uuid": "0be3133d-572b-5a4b-836d-2cbe5b5e5258", 00:22:48.671 "is_configured": true, 00:22:48.671 "data_offset": 256, 00:22:48.671 "data_size": 7936 00:22:48.671 } 00:22:48.671 ] 00:22:48.671 }' 00:22:48.671 23:38:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.671 23:38:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.239 23:38:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:22:49.239 23:38:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:49.239 23:38:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:22:49.499 [2024-05-14 23:38:12.729064] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@563 -- # '[' 69601175-13bb-40c0-857b-1de88d181fc9 '!=' 69601175-13bb-40c0-857b-1de88d181fc9 ']' 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@568 -- # killprocess 73290 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@946 -- # '[' -z 73290 ']' 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # kill -0 73290 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # uname 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73290 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:49.499 killing process with pid 73290 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73290' 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@965 -- # kill 73290 00:22:49.499 23:38:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # wait 73290 00:22:49.499 [2024-05-14 23:38:12.768536] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:49.499 [2024-05-14 23:38:12.768618] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:49.499 [2024-05-14 23:38:12.768657] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:49.499 [2024-05-14 23:38:12.768669] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:22:49.758 [2024-05-14 23:38:12.934684] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:51.133 23:38:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # return 0 00:22:51.133 00:22:51.133 real 0m15.177s 00:22:51.133 user 0m27.770s 00:22:51.133 sys 0m1.529s 00:22:51.133 23:38:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:51.133 23:38:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.133 ************************************ 00:22:51.133 END TEST raid_superblock_test_4k 00:22:51.133 ************************************ 00:22:51.133 23:38:14 bdev_raid -- bdev/bdev_raid.sh@846 -- # '[' '' = true ']' 00:22:51.133 23:38:14 bdev_raid -- bdev/bdev_raid.sh@850 -- # base_malloc_params='-m 32' 00:22:51.133 23:38:14 bdev_raid -- bdev/bdev_raid.sh@851 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:22:51.133 23:38:14 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:51.133 23:38:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:51.133 23:38:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:51.133 ************************************ 00:22:51.133 START TEST raid_state_function_test_sb_md_separate 00:22:51.133 ************************************ 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:51.133 Process raid pid: 73768 00:22:51.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # raid_pid=73768 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 73768' 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@247 -- # waitforlisten 73768 /var/tmp/spdk-raid.sock 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 73768 ']' 
00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:51.133 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:51.134 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:51.134 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:51.134 23:38:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:51.134 [2024-05-14 23:38:14.329605] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:22:51.134 [2024-05-14 23:38:14.329857] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.392 [2024-05-14 23:38:14.502881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.650 [2024-05-14 23:38:14.729999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.650 [2024-05-14 23:38:14.926672] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:51.908 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:51.908 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:22:51.908 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:52.166 [2024-05-14 23:38:15.361377] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:52.166 [2024-05-14 23:38:15.361474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:52.166 [2024-05-14 23:38:15.361495] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:52.166 [2024-05-14 23:38:15.361526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:52.166 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:52.166 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:52.166 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:52.166 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:52.166 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:52.166 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:52.166 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:52.166 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:52.166 23:38:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:52.166 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:52.166 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.166 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.425 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:52.425 "name": "Existed_Raid", 00:22:52.425 "uuid": "45a9a9f7-ed56-47df-ba95-276ac05788ff", 00:22:52.425 "strip_size_kb": 0, 00:22:52.425 "state": "configuring", 00:22:52.425 "raid_level": "raid1", 00:22:52.425 "superblock": true, 00:22:52.425 "num_base_bdevs": 2, 00:22:52.425 "num_base_bdevs_discovered": 0, 00:22:52.425 "num_base_bdevs_operational": 2, 00:22:52.425 "base_bdevs_list": [ 00:22:52.425 { 00:22:52.425 "name": "BaseBdev1", 00:22:52.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.425 "is_configured": false, 00:22:52.425 "data_offset": 0, 00:22:52.425 "data_size": 0 00:22:52.425 }, 00:22:52.425 { 00:22:52.425 "name": "BaseBdev2", 00:22:52.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.425 "is_configured": false, 00:22:52.425 "data_offset": 0, 00:22:52.425 "data_size": 0 00:22:52.425 } 00:22:52.425 ] 00:22:52.425 }' 00:22:52.425 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:52.425 23:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.992 23:38:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:53.256 [2024-05-14 23:38:16.369381] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:53.256 [2024-05-14 23:38:16.369425] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:22:53.256 23:38:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:53.515 [2024-05-14 23:38:16.557477] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:53.515 [2024-05-14 23:38:16.557609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:53.515 [2024-05-14 23:38:16.557626] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:53.515 [2024-05-14 23:38:16.557655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:53.515 23:38:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:22:53.515 [2024-05-14 23:38:16.791936] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:53.515 BaseBdev1 00:22:53.774 23:38:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:22:53.774 23:38:16 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:53.774 23:38:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:53.774 23:38:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:22:53.774 23:38:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:53.774 23:38:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:53.774 23:38:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:53.774 23:38:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:54.034 [ 00:22:54.034 { 00:22:54.034 "name": "BaseBdev1", 00:22:54.034 "aliases": [ 00:22:54.034 "9548db1e-bfb7-48f4-92e8-f4c4a1aef038" 00:22:54.034 ], 00:22:54.034 "product_name": "Malloc disk", 00:22:54.034 "block_size": 4096, 00:22:54.034 "num_blocks": 8192, 00:22:54.034 "uuid": "9548db1e-bfb7-48f4-92e8-f4c4a1aef038", 00:22:54.034 "md_size": 32, 00:22:54.034 "md_interleave": false, 00:22:54.034 "dif_type": 0, 00:22:54.034 "assigned_rate_limits": { 00:22:54.034 "rw_ios_per_sec": 0, 00:22:54.034 "rw_mbytes_per_sec": 0, 00:22:54.034 "r_mbytes_per_sec": 0, 00:22:54.034 "w_mbytes_per_sec": 0 00:22:54.034 }, 00:22:54.034 "claimed": true, 00:22:54.034 "claim_type": "exclusive_write", 00:22:54.034 "zoned": false, 00:22:54.034 "supported_io_types": { 00:22:54.034 "read": true, 00:22:54.034 "write": true, 00:22:54.034 "unmap": true, 00:22:54.034 "write_zeroes": true, 00:22:54.034 "flush": true, 00:22:54.034 "reset": true, 00:22:54.034 "compare": false, 00:22:54.034 "compare_and_write": false, 00:22:54.034 "abort": true, 00:22:54.034 "nvme_admin": false, 00:22:54.034 "nvme_io": false 00:22:54.034 }, 00:22:54.034 "memory_domains": [ 00:22:54.034 { 00:22:54.034 "dma_device_id": "system", 00:22:54.034 "dma_device_type": 1 00:22:54.034 }, 00:22:54.034 { 00:22:54.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.034 "dma_device_type": 2 00:22:54.034 } 00:22:54.034 ], 00:22:54.034 "driver_specific": {} 00:22:54.034 } 00:22:54.034 ] 00:22:54.034 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:22:54.034 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:54.034 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:54.034 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:54.034 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:54.034 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:54.035 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:54.035 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:54.035 
23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:54.035 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:54.035 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:54.035 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.035 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.293 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:54.293 "name": "Existed_Raid", 00:22:54.293 "uuid": "12b2da7c-f1ff-4d70-8883-1f34bee55557", 00:22:54.293 "strip_size_kb": 0, 00:22:54.293 "state": "configuring", 00:22:54.293 "raid_level": "raid1", 00:22:54.293 "superblock": true, 00:22:54.293 "num_base_bdevs": 2, 00:22:54.293 "num_base_bdevs_discovered": 1, 00:22:54.293 "num_base_bdevs_operational": 2, 00:22:54.293 "base_bdevs_list": [ 00:22:54.293 { 00:22:54.293 "name": "BaseBdev1", 00:22:54.293 "uuid": "9548db1e-bfb7-48f4-92e8-f4c4a1aef038", 00:22:54.293 "is_configured": true, 00:22:54.293 "data_offset": 256, 00:22:54.293 "data_size": 7936 00:22:54.293 }, 00:22:54.293 { 00:22:54.293 "name": "BaseBdev2", 00:22:54.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.293 "is_configured": false, 00:22:54.293 "data_offset": 0, 00:22:54.293 "data_size": 0 00:22:54.293 } 00:22:54.293 ] 00:22:54.293 }' 00:22:54.293 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:54.293 23:38:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:54.860 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:55.119 [2024-05-14 23:38:18.316377] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:55.119 [2024-05-14 23:38:18.316441] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:22:55.119 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:55.378 [2024-05-14 23:38:18.532539] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:55.378 [2024-05-14 23:38:18.533944] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:55.378 [2024-05-14 23:38:18.534012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.378 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:55.637 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:55.637 "name": "Existed_Raid", 00:22:55.637 "uuid": "bb306f7d-892d-4461-b5ac-633e04fef6c7", 00:22:55.637 "strip_size_kb": 0, 00:22:55.637 "state": "configuring", 00:22:55.637 "raid_level": "raid1", 00:22:55.637 "superblock": true, 00:22:55.637 "num_base_bdevs": 2, 00:22:55.637 "num_base_bdevs_discovered": 1, 00:22:55.637 "num_base_bdevs_operational": 2, 00:22:55.637 "base_bdevs_list": [ 00:22:55.637 { 00:22:55.637 "name": "BaseBdev1", 00:22:55.637 "uuid": "9548db1e-bfb7-48f4-92e8-f4c4a1aef038", 00:22:55.637 "is_configured": true, 00:22:55.637 "data_offset": 256, 00:22:55.637 "data_size": 7936 00:22:55.637 }, 00:22:55.637 { 00:22:55.637 "name": "BaseBdev2", 00:22:55.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.637 "is_configured": false, 00:22:55.637 "data_offset": 0, 00:22:55.637 "data_size": 0 00:22:55.637 } 00:22:55.637 ] 00:22:55.637 }' 00:22:55.637 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:55.637 23:38:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:56.205 23:38:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:22:56.466 [2024-05-14 23:38:19.643827] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:56.466 [2024-05-14 23:38:19.644002] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:22:56.466 [2024-05-14 23:38:19.644018] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:56.466 [2024-05-14 23:38:19.644116] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:22:56.466 BaseBdev2 00:22:56.466 [2024-05-14 23:38:19.644404] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:22:56.466 [2024-05-14 23:38:19.644426] 
bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:22:56.466 [2024-05-14 23:38:19.644524] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.466 23:38:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:22:56.466 23:38:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:56.466 23:38:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:56.466 23:38:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:22:56.466 23:38:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:56.466 23:38:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:56.466 23:38:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:56.726 23:38:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:56.984 [ 00:22:56.984 { 00:22:56.984 "name": "BaseBdev2", 00:22:56.984 "aliases": [ 00:22:56.984 "e9e51e73-5478-46b6-952d-a370b101cc54" 00:22:56.984 ], 00:22:56.984 "product_name": "Malloc disk", 00:22:56.984 "block_size": 4096, 00:22:56.984 "num_blocks": 8192, 00:22:56.984 "uuid": "e9e51e73-5478-46b6-952d-a370b101cc54", 00:22:56.984 "md_size": 32, 00:22:56.984 "md_interleave": false, 00:22:56.984 "dif_type": 0, 00:22:56.984 "assigned_rate_limits": { 00:22:56.984 "rw_ios_per_sec": 0, 00:22:56.984 "rw_mbytes_per_sec": 0, 00:22:56.984 "r_mbytes_per_sec": 0, 00:22:56.984 "w_mbytes_per_sec": 0 00:22:56.984 }, 00:22:56.984 "claimed": true, 00:22:56.984 "claim_type": "exclusive_write", 00:22:56.984 "zoned": false, 00:22:56.984 "supported_io_types": { 00:22:56.984 "read": true, 00:22:56.984 "write": true, 00:22:56.984 "unmap": true, 00:22:56.984 "write_zeroes": true, 00:22:56.984 "flush": true, 00:22:56.984 "reset": true, 00:22:56.984 "compare": false, 00:22:56.984 "compare_and_write": false, 00:22:56.984 "abort": true, 00:22:56.984 "nvme_admin": false, 00:22:56.984 "nvme_io": false 00:22:56.984 }, 00:22:56.984 "memory_domains": [ 00:22:56.984 { 00:22:56.984 "dma_device_id": "system", 00:22:56.984 "dma_device_type": 1 00:22:56.984 }, 00:22:56.984 { 00:22:56.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.984 "dma_device_type": 2 00:22:56.984 } 00:22:56.984 ], 00:22:56.985 "driver_specific": {} 00:22:56.985 } 00:22:56.985 ] 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:56.985 23:38:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.985 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:57.244 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:57.244 "name": "Existed_Raid", 00:22:57.244 "uuid": "bb306f7d-892d-4461-b5ac-633e04fef6c7", 00:22:57.244 "strip_size_kb": 0, 00:22:57.244 "state": "online", 00:22:57.244 "raid_level": "raid1", 00:22:57.244 "superblock": true, 00:22:57.244 "num_base_bdevs": 2, 00:22:57.244 "num_base_bdevs_discovered": 2, 00:22:57.244 "num_base_bdevs_operational": 2, 00:22:57.244 "base_bdevs_list": [ 00:22:57.244 { 00:22:57.244 "name": "BaseBdev1", 00:22:57.244 "uuid": "9548db1e-bfb7-48f4-92e8-f4c4a1aef038", 00:22:57.244 "is_configured": true, 00:22:57.244 "data_offset": 256, 00:22:57.244 "data_size": 7936 00:22:57.244 }, 00:22:57.244 { 00:22:57.244 "name": "BaseBdev2", 00:22:57.244 "uuid": "e9e51e73-5478-46b6-952d-a370b101cc54", 00:22:57.244 "is_configured": true, 00:22:57.244 "data_offset": 256, 00:22:57.244 "data_size": 7936 00:22:57.244 } 00:22:57.244 ] 00:22:57.244 }' 00:22:57.244 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:57.244 23:38:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.811 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:22:57.811 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:22:57.811 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:57.811 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:57.811 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:57.811 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:22:57.811 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:57.811 23:38:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:58.070 [2024-05-14 23:38:21.252402] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:58.070 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:58.070 "name": "Existed_Raid", 00:22:58.070 "aliases": [ 00:22:58.070 "bb306f7d-892d-4461-b5ac-633e04fef6c7" 00:22:58.070 ], 00:22:58.070 "product_name": "Raid Volume", 00:22:58.070 "block_size": 4096, 00:22:58.070 "num_blocks": 7936, 00:22:58.070 "uuid": "bb306f7d-892d-4461-b5ac-633e04fef6c7", 00:22:58.070 "md_size": 32, 00:22:58.070 "md_interleave": false, 00:22:58.070 "dif_type": 0, 00:22:58.070 "assigned_rate_limits": { 00:22:58.070 "rw_ios_per_sec": 0, 00:22:58.070 "rw_mbytes_per_sec": 0, 00:22:58.070 "r_mbytes_per_sec": 0, 00:22:58.070 "w_mbytes_per_sec": 0 00:22:58.070 }, 00:22:58.070 "claimed": false, 00:22:58.070 "zoned": false, 00:22:58.070 "supported_io_types": { 00:22:58.070 "read": true, 00:22:58.070 "write": true, 00:22:58.070 "unmap": false, 00:22:58.070 "write_zeroes": true, 00:22:58.070 "flush": false, 00:22:58.070 "reset": true, 00:22:58.070 "compare": false, 00:22:58.070 "compare_and_write": false, 00:22:58.070 "abort": false, 00:22:58.070 "nvme_admin": false, 00:22:58.070 "nvme_io": false 00:22:58.070 }, 00:22:58.070 "memory_domains": [ 00:22:58.070 { 00:22:58.070 "dma_device_id": "system", 00:22:58.070 "dma_device_type": 1 00:22:58.070 }, 00:22:58.070 { 00:22:58.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.070 "dma_device_type": 2 00:22:58.070 }, 00:22:58.070 { 00:22:58.070 "dma_device_id": "system", 00:22:58.070 "dma_device_type": 1 00:22:58.070 }, 00:22:58.070 { 00:22:58.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.070 "dma_device_type": 2 00:22:58.070 } 00:22:58.070 ], 00:22:58.070 "driver_specific": { 00:22:58.070 "raid": { 00:22:58.070 "uuid": "bb306f7d-892d-4461-b5ac-633e04fef6c7", 00:22:58.070 "strip_size_kb": 0, 00:22:58.070 "state": "online", 00:22:58.070 "raid_level": "raid1", 00:22:58.070 "superblock": true, 00:22:58.070 "num_base_bdevs": 2, 00:22:58.070 "num_base_bdevs_discovered": 2, 00:22:58.070 "num_base_bdevs_operational": 2, 00:22:58.070 "base_bdevs_list": [ 00:22:58.070 { 00:22:58.070 "name": "BaseBdev1", 00:22:58.070 "uuid": "9548db1e-bfb7-48f4-92e8-f4c4a1aef038", 00:22:58.070 "is_configured": true, 00:22:58.070 "data_offset": 256, 00:22:58.070 "data_size": 7936 00:22:58.070 }, 00:22:58.070 { 00:22:58.070 "name": "BaseBdev2", 00:22:58.070 "uuid": "e9e51e73-5478-46b6-952d-a370b101cc54", 00:22:58.070 "is_configured": true, 00:22:58.070 "data_offset": 256, 00:22:58.070 "data_size": 7936 00:22:58.070 } 00:22:58.070 ] 00:22:58.070 } 00:22:58.070 } 00:22:58.070 }' 00:22:58.070 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:58.070 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:22:58.070 BaseBdev2' 00:22:58.070 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:58.070 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:58.070 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:58.330 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:58.330 "name": "BaseBdev1", 00:22:58.330 "aliases": [ 00:22:58.330 "9548db1e-bfb7-48f4-92e8-f4c4a1aef038" 00:22:58.330 ], 00:22:58.330 "product_name": "Malloc disk", 00:22:58.330 "block_size": 4096, 00:22:58.330 "num_blocks": 8192, 00:22:58.330 "uuid": "9548db1e-bfb7-48f4-92e8-f4c4a1aef038", 00:22:58.330 "md_size": 32, 00:22:58.330 "md_interleave": false, 00:22:58.330 "dif_type": 0, 00:22:58.330 "assigned_rate_limits": { 00:22:58.330 "rw_ios_per_sec": 0, 00:22:58.330 "rw_mbytes_per_sec": 0, 00:22:58.330 "r_mbytes_per_sec": 0, 00:22:58.330 "w_mbytes_per_sec": 0 00:22:58.330 }, 00:22:58.330 "claimed": true, 00:22:58.330 "claim_type": "exclusive_write", 00:22:58.330 "zoned": false, 00:22:58.330 "supported_io_types": { 00:22:58.330 "read": true, 00:22:58.330 "write": true, 00:22:58.330 "unmap": true, 00:22:58.330 "write_zeroes": true, 00:22:58.330 "flush": true, 00:22:58.330 "reset": true, 00:22:58.330 "compare": false, 00:22:58.330 "compare_and_write": false, 00:22:58.330 "abort": true, 00:22:58.330 "nvme_admin": false, 00:22:58.330 "nvme_io": false 00:22:58.330 }, 00:22:58.330 "memory_domains": [ 00:22:58.330 { 00:22:58.330 "dma_device_id": "system", 00:22:58.330 "dma_device_type": 1 00:22:58.330 }, 00:22:58.330 { 00:22:58.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.330 "dma_device_type": 2 00:22:58.330 } 00:22:58.330 ], 00:22:58.330 "driver_specific": {} 00:22:58.330 }' 00:22:58.330 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:58.330 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:58.589 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:22:58.589 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:58.589 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:58.589 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:22:58.589 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:58.589 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:58.589 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:22:58.589 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:58.848 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:58.848 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:22:58.848 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:58.848 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:58.848 23:38:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:59.107 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:59.107 
"name": "BaseBdev2", 00:22:59.107 "aliases": [ 00:22:59.107 "e9e51e73-5478-46b6-952d-a370b101cc54" 00:22:59.107 ], 00:22:59.107 "product_name": "Malloc disk", 00:22:59.107 "block_size": 4096, 00:22:59.107 "num_blocks": 8192, 00:22:59.107 "uuid": "e9e51e73-5478-46b6-952d-a370b101cc54", 00:22:59.107 "md_size": 32, 00:22:59.107 "md_interleave": false, 00:22:59.107 "dif_type": 0, 00:22:59.107 "assigned_rate_limits": { 00:22:59.107 "rw_ios_per_sec": 0, 00:22:59.107 "rw_mbytes_per_sec": 0, 00:22:59.107 "r_mbytes_per_sec": 0, 00:22:59.107 "w_mbytes_per_sec": 0 00:22:59.107 }, 00:22:59.107 "claimed": true, 00:22:59.107 "claim_type": "exclusive_write", 00:22:59.107 "zoned": false, 00:22:59.107 "supported_io_types": { 00:22:59.107 "read": true, 00:22:59.107 "write": true, 00:22:59.107 "unmap": true, 00:22:59.107 "write_zeroes": true, 00:22:59.107 "flush": true, 00:22:59.107 "reset": true, 00:22:59.107 "compare": false, 00:22:59.107 "compare_and_write": false, 00:22:59.107 "abort": true, 00:22:59.107 "nvme_admin": false, 00:22:59.107 "nvme_io": false 00:22:59.107 }, 00:22:59.107 "memory_domains": [ 00:22:59.107 { 00:22:59.107 "dma_device_id": "system", 00:22:59.107 "dma_device_type": 1 00:22:59.107 }, 00:22:59.107 { 00:22:59.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.107 "dma_device_type": 2 00:22:59.107 } 00:22:59.107 ], 00:22:59.107 "driver_specific": {} 00:22:59.107 }' 00:22:59.107 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:59.107 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:59.107 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:22:59.107 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:59.366 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:59.366 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:22:59.366 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:59.366 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:59.366 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:22:59.366 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:59.366 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:59.624 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:22:59.624 23:38:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:59.882 [2024-05-14 23:38:22.916812] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # local expected_state 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@215 -- # return 0 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.882 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.139 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.139 "name": "Existed_Raid", 00:23:00.139 "uuid": "bb306f7d-892d-4461-b5ac-633e04fef6c7", 00:23:00.139 "strip_size_kb": 0, 00:23:00.139 "state": "online", 00:23:00.139 "raid_level": "raid1", 00:23:00.139 "superblock": true, 00:23:00.139 "num_base_bdevs": 2, 00:23:00.139 "num_base_bdevs_discovered": 1, 00:23:00.139 "num_base_bdevs_operational": 1, 00:23:00.139 "base_bdevs_list": [ 00:23:00.139 { 00:23:00.139 "name": null, 00:23:00.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.139 "is_configured": false, 00:23:00.139 "data_offset": 256, 00:23:00.139 "data_size": 7936 00:23:00.139 }, 00:23:00.139 { 00:23:00.139 "name": "BaseBdev2", 00:23:00.139 "uuid": "e9e51e73-5478-46b6-952d-a370b101cc54", 00:23:00.139 "is_configured": true, 00:23:00.139 "data_offset": 256, 00:23:00.139 "data_size": 7936 00:23:00.139 } 00:23:00.139 ] 00:23:00.139 }' 00:23:00.139 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.139 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.702 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:00.703 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:00.703 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.703 23:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:23:00.960 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:23:00.960 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:00.960 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:01.218 [2024-05-14 23:38:24.382300] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:01.218 [2024-05-14 23:38:24.382404] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:01.218 [2024-05-14 23:38:24.470278] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:01.218 [2024-05-14 23:38:24.470376] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:01.218 [2024-05-14 23:38:24.470393] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:23:01.218 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:01.218 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:01.218 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.218 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:23:01.476 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:23:01.476 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:23:01.476 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:23:01.477 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@342 -- # killprocess 73768 00:23:01.477 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 73768 ']' 00:23:01.477 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 73768 00:23:01.477 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:23:01.477 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:01.477 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73768 00:23:01.477 killing process with pid 73768 00:23:01.477 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:01.477 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:01.477 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73768' 00:23:01.477 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 73768 00:23:01.477 23:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # 
wait 73768 00:23:01.477 [2024-05-14 23:38:24.739188] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:01.477 [2024-05-14 23:38:24.739349] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:02.850 ************************************ 00:23:02.850 END TEST raid_state_function_test_sb_md_separate 00:23:02.850 ************************************ 00:23:02.850 23:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@344 -- # return 0 00:23:02.850 00:23:02.850 real 0m11.773s 00:23:02.850 user 0m20.754s 00:23:02.850 sys 0m1.297s 00:23:02.851 23:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:02.851 23:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:02.851 23:38:25 bdev_raid -- bdev/bdev_raid.sh@852 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:23:02.851 23:38:25 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:23:02.851 23:38:25 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:02.851 23:38:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:02.851 ************************************ 00:23:02.851 START TEST raid_superblock_test_md_separate 00:23:02.851 ************************************ 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:23:02.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=74152 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 74152 /var/tmp/spdk-raid.sock 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@827 -- # '[' -z 74152 ']' 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:02.851 23:38:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:03.109 [2024-05-14 23:38:26.140137] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:23:03.109 [2024-05-14 23:38:26.140321] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74152 ] 00:23:03.109 [2024-05-14 23:38:26.289720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.366 [2024-05-14 23:38:26.498343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.623 [2024-05-14 23:38:26.693487] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:03.881 23:38:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:03.881 23:38:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # return 0 00:23:03.881 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:03.881 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:03.881 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:03.881 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:03.881 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:03.881 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:03.881 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:03.881 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:03.881 23:38:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:23:03.881 malloc1 00:23:04.139 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:04.139 [2024-05-14 23:38:27.393132] 
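Each test case talks to a bare bdev_svc application over its own RPC socket, and the md-separate base bdevs are malloc bdevs created with a separate 32-byte metadata area and then wrapped in passthru bdevs. A sketch of that setup built from the commands visible in this run; waitforlisten is the autotest helper that blocks until the socket appears:

    # Start the target with bdev_raid debug logging enabled (the source of the
    # *DEBUG* lines in this log); the script then waits on the socket via
    # waitforlisten before issuing RPCs.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # 32 MB malloc bdev, 4096-byte blocks, 32 bytes of separate (non-interleaved)
    # metadata per block -- the "md separate" layout under test.
    $rpc bdev_malloc_create 32 4096 -m 32 -b malloc1

    # Wrap it in a passthru bdev with a fixed UUID so base bdev identities stay
    # stable across the test.
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001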
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:04.139 [2024-05-14 23:38:27.393241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.139 [2024-05-14 23:38:27.393298] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:23:04.139 [2024-05-14 23:38:27.393341] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.139 [2024-05-14 23:38:27.395010] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.139 [2024-05-14 23:38:27.395052] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:04.139 pt1 00:23:04.139 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:04.139 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:04.139 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:04.139 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:04.139 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:04.139 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:04.139 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:04.139 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:04.139 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:23:04.397 malloc2 00:23:04.397 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:04.655 [2024-05-14 23:38:27.864048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:04.655 [2024-05-14 23:38:27.864134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.655 [2024-05-14 23:38:27.864346] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:23:04.655 [2024-05-14 23:38:27.864395] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.655 pt2 00:23:04.655 [2024-05-14 23:38:27.865915] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.655 [2024-05-14 23:38:27.865968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:04.655 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:04.655 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:04.655 23:38:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:23:04.913 [2024-05-14 23:38:28.060172] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:04.913 [2024-05-14 23:38:28.061754] 
bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:04.913 [2024-05-14 23:38:28.061909] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:23:04.913 [2024-05-14 23:38:28.061924] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:04.913 [2024-05-14 23:38:28.062058] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:23:04.913 [2024-05-14 23:38:28.062143] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:23:04.913 [2024-05-14 23:38:28.062173] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:23:04.913 [2024-05-14 23:38:28.062261] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.913 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.171 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:05.171 "name": "raid_bdev1", 00:23:05.171 "uuid": "81f79340-594d-43bb-bbaf-33855af6f7bf", 00:23:05.171 "strip_size_kb": 0, 00:23:05.171 "state": "online", 00:23:05.171 "raid_level": "raid1", 00:23:05.171 "superblock": true, 00:23:05.171 "num_base_bdevs": 2, 00:23:05.171 "num_base_bdevs_discovered": 2, 00:23:05.171 "num_base_bdevs_operational": 2, 00:23:05.171 "base_bdevs_list": [ 00:23:05.171 { 00:23:05.171 "name": "pt1", 00:23:05.171 "uuid": "58495e46-853c-555c-8ca5-94082bfcdbaa", 00:23:05.171 "is_configured": true, 00:23:05.171 "data_offset": 256, 00:23:05.171 "data_size": 7936 00:23:05.171 }, 00:23:05.171 { 00:23:05.171 "name": "pt2", 00:23:05.171 "uuid": "fc85b49b-ed9a-58b8-9764-2217e7b90ba6", 00:23:05.171 "is_configured": true, 00:23:05.171 "data_offset": 256, 00:23:05.171 "data_size": 7936 00:23:05.171 } 00:23:05.171 ] 00:23:05.171 }' 00:23:05.171 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:05.171 23:38:28 bdev_raid.raid_superblock_test_md_separate -- 
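With pt1 and pt2 registered, the array itself is a single RPC; the -s flag asks bdev_raid to write a superblock onto each base bdev, which the later re-assembly steps depend on. A sketch under the same naming assumptions:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # raid1 across the two passthru bdevs, with an on-disk superblock (-s).
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

    # The volume should come up online with both base bdevs discovered, as in
    # the verify_raid_bdev_state output above.
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'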
common/autotest_common.sh@10 -- # set +x 00:23:05.737 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:05.737 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:23:05.737 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:05.737 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:05.737 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:05.737 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:23:05.737 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:05.737 23:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:05.995 [2024-05-14 23:38:29.128392] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:05.995 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:05.995 "name": "raid_bdev1", 00:23:05.995 "aliases": [ 00:23:05.995 "81f79340-594d-43bb-bbaf-33855af6f7bf" 00:23:05.995 ], 00:23:05.995 "product_name": "Raid Volume", 00:23:05.995 "block_size": 4096, 00:23:05.995 "num_blocks": 7936, 00:23:05.995 "uuid": "81f79340-594d-43bb-bbaf-33855af6f7bf", 00:23:05.995 "md_size": 32, 00:23:05.995 "md_interleave": false, 00:23:05.995 "dif_type": 0, 00:23:05.995 "assigned_rate_limits": { 00:23:05.995 "rw_ios_per_sec": 0, 00:23:05.995 "rw_mbytes_per_sec": 0, 00:23:05.995 "r_mbytes_per_sec": 0, 00:23:05.995 "w_mbytes_per_sec": 0 00:23:05.995 }, 00:23:05.995 "claimed": false, 00:23:05.995 "zoned": false, 00:23:05.995 "supported_io_types": { 00:23:05.995 "read": true, 00:23:05.995 "write": true, 00:23:05.995 "unmap": false, 00:23:05.995 "write_zeroes": true, 00:23:05.995 "flush": false, 00:23:05.995 "reset": true, 00:23:05.995 "compare": false, 00:23:05.995 "compare_and_write": false, 00:23:05.995 "abort": false, 00:23:05.995 "nvme_admin": false, 00:23:05.995 "nvme_io": false 00:23:05.995 }, 00:23:05.995 "memory_domains": [ 00:23:05.995 { 00:23:05.995 "dma_device_id": "system", 00:23:05.995 "dma_device_type": 1 00:23:05.995 }, 00:23:05.995 { 00:23:05.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.995 "dma_device_type": 2 00:23:05.995 }, 00:23:05.995 { 00:23:05.995 "dma_device_id": "system", 00:23:05.995 "dma_device_type": 1 00:23:05.995 }, 00:23:05.995 { 00:23:05.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.996 "dma_device_type": 2 00:23:05.996 } 00:23:05.996 ], 00:23:05.996 "driver_specific": { 00:23:05.996 "raid": { 00:23:05.996 "uuid": "81f79340-594d-43bb-bbaf-33855af6f7bf", 00:23:05.996 "strip_size_kb": 0, 00:23:05.996 "state": "online", 00:23:05.996 "raid_level": "raid1", 00:23:05.996 "superblock": true, 00:23:05.996 "num_base_bdevs": 2, 00:23:05.996 "num_base_bdevs_discovered": 2, 00:23:05.996 "num_base_bdevs_operational": 2, 00:23:05.996 "base_bdevs_list": [ 00:23:05.996 { 00:23:05.996 "name": "pt1", 00:23:05.996 "uuid": "58495e46-853c-555c-8ca5-94082bfcdbaa", 00:23:05.996 "is_configured": true, 00:23:05.996 "data_offset": 256, 00:23:05.996 "data_size": 7936 00:23:05.996 }, 00:23:05.996 { 00:23:05.996 "name": "pt2", 00:23:05.996 "uuid": 
"fc85b49b-ed9a-58b8-9764-2217e7b90ba6", 00:23:05.996 "is_configured": true, 00:23:05.996 "data_offset": 256, 00:23:05.996 "data_size": 7936 00:23:05.996 } 00:23:05.996 ] 00:23:05.996 } 00:23:05.996 } 00:23:05.996 }' 00:23:05.996 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:05.996 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:23:05.996 pt2' 00:23:05.996 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:05.996 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:05.996 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:06.254 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:06.254 "name": "pt1", 00:23:06.254 "aliases": [ 00:23:06.254 "58495e46-853c-555c-8ca5-94082bfcdbaa" 00:23:06.254 ], 00:23:06.254 "product_name": "passthru", 00:23:06.254 "block_size": 4096, 00:23:06.254 "num_blocks": 8192, 00:23:06.254 "uuid": "58495e46-853c-555c-8ca5-94082bfcdbaa", 00:23:06.254 "md_size": 32, 00:23:06.254 "md_interleave": false, 00:23:06.254 "dif_type": 0, 00:23:06.254 "assigned_rate_limits": { 00:23:06.254 "rw_ios_per_sec": 0, 00:23:06.254 "rw_mbytes_per_sec": 0, 00:23:06.254 "r_mbytes_per_sec": 0, 00:23:06.254 "w_mbytes_per_sec": 0 00:23:06.254 }, 00:23:06.254 "claimed": true, 00:23:06.254 "claim_type": "exclusive_write", 00:23:06.254 "zoned": false, 00:23:06.254 "supported_io_types": { 00:23:06.254 "read": true, 00:23:06.254 "write": true, 00:23:06.254 "unmap": true, 00:23:06.254 "write_zeroes": true, 00:23:06.254 "flush": true, 00:23:06.254 "reset": true, 00:23:06.254 "compare": false, 00:23:06.254 "compare_and_write": false, 00:23:06.254 "abort": true, 00:23:06.254 "nvme_admin": false, 00:23:06.254 "nvme_io": false 00:23:06.254 }, 00:23:06.254 "memory_domains": [ 00:23:06.254 { 00:23:06.254 "dma_device_id": "system", 00:23:06.254 "dma_device_type": 1 00:23:06.254 }, 00:23:06.254 { 00:23:06.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.254 "dma_device_type": 2 00:23:06.254 } 00:23:06.254 ], 00:23:06.254 "driver_specific": { 00:23:06.254 "passthru": { 00:23:06.254 "name": "pt1", 00:23:06.254 "base_bdev_name": "malloc1" 00:23:06.254 } 00:23:06.254 } 00:23:06.254 }' 00:23:06.254 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:06.254 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:06.254 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:23:06.254 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:06.512 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:06.512 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:23:06.512 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:06.512 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:06.512 23:38:29 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:23:06.512 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:06.512 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:06.771 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:23:06.771 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:06.771 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:06.771 23:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:06.771 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:06.771 "name": "pt2", 00:23:06.771 "aliases": [ 00:23:06.771 "fc85b49b-ed9a-58b8-9764-2217e7b90ba6" 00:23:06.771 ], 00:23:06.771 "product_name": "passthru", 00:23:06.771 "block_size": 4096, 00:23:06.771 "num_blocks": 8192, 00:23:06.771 "uuid": "fc85b49b-ed9a-58b8-9764-2217e7b90ba6", 00:23:06.771 "md_size": 32, 00:23:06.771 "md_interleave": false, 00:23:06.771 "dif_type": 0, 00:23:06.771 "assigned_rate_limits": { 00:23:06.771 "rw_ios_per_sec": 0, 00:23:06.771 "rw_mbytes_per_sec": 0, 00:23:06.771 "r_mbytes_per_sec": 0, 00:23:06.771 "w_mbytes_per_sec": 0 00:23:06.771 }, 00:23:06.771 "claimed": true, 00:23:06.771 "claim_type": "exclusive_write", 00:23:06.771 "zoned": false, 00:23:06.771 "supported_io_types": { 00:23:06.771 "read": true, 00:23:06.771 "write": true, 00:23:06.771 "unmap": true, 00:23:06.771 "write_zeroes": true, 00:23:06.771 "flush": true, 00:23:06.771 "reset": true, 00:23:06.771 "compare": false, 00:23:06.771 "compare_and_write": false, 00:23:06.771 "abort": true, 00:23:06.771 "nvme_admin": false, 00:23:06.771 "nvme_io": false 00:23:06.771 }, 00:23:06.771 "memory_domains": [ 00:23:06.771 { 00:23:06.771 "dma_device_id": "system", 00:23:06.771 "dma_device_type": 1 00:23:06.771 }, 00:23:06.771 { 00:23:06.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.771 "dma_device_type": 2 00:23:06.771 } 00:23:06.771 ], 00:23:06.771 "driver_specific": { 00:23:06.771 "passthru": { 00:23:06.771 "name": "pt2", 00:23:06.771 "base_bdev_name": "malloc2" 00:23:06.771 } 00:23:06.771 } 00:23:06.771 }' 00:23:06.771 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:07.030 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:07.030 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:23:07.030 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:07.030 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:07.030 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:23:07.030 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:07.030 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:07.288 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:23:07.288 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:07.288 
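The block_size / md_size / md_interleave / dif_type checks repeated above are what actually pin down the md-separate format, both on the raid volume and on each passthru base bdev. A compact sketch of the same assertions (the loop variable is illustrative):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for name in raid_bdev1 pt1 pt2; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ "$(jq .block_size <<< "$info")" == 4096 ]]      # 4 KiB data blocks
        [[ "$(jq .md_size <<< "$info")" == 32 ]]           # 32 B metadata per block
        [[ "$(jq .md_interleave <<< "$info")" == false ]]  # metadata in a separate buffer
        [[ "$(jq .dif_type <<< "$info")" == 0 ]]           # no DIF protection
    done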
23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:07.288 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:23:07.288 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:07.288 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:07.547 [2024-05-14 23:38:30.624600] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:07.547 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=81f79340-594d-43bb-bbaf-33855af6f7bf 00:23:07.547 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 81f79340-594d-43bb-bbaf-33855af6f7bf ']' 00:23:07.547 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:07.805 [2024-05-14 23:38:30.852481] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:07.805 [2024-05-14 23:38:30.852519] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:07.805 [2024-05-14 23:38:30.852595] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:07.805 [2024-05-14 23:38:30.852637] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:07.805 [2024-05-14 23:38:30.852649] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:23:07.805 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:07.805 23:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.064 23:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:08.064 23:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:08.064 23:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:08.064 23:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:08.064 23:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:08.064 23:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:08.325 23:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:08.325 23:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:08.584 23:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:08.843 [2024-05-14 23:38:31.988607] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:08.843 [2024-05-14 23:38:31.990211] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:08.843 [2024-05-14 23:38:31.990269] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:08.843 [2024-05-14 23:38:31.990348] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:08.843 [2024-05-14 23:38:31.990388] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:08.843 [2024-05-14 23:38:31.990400] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:23:08.843 request: 00:23:08.843 { 00:23:08.843 "name": "raid_bdev1", 00:23:08.843 "raid_level": "raid1", 00:23:08.843 "base_bdevs": [ 00:23:08.843 "malloc1", 00:23:08.843 "malloc2" 00:23:08.843 ], 00:23:08.843 "superblock": false, 00:23:08.843 "method": "bdev_raid_create", 00:23:08.843 "req_id": 1 00:23:08.843 } 00:23:08.843 Got JSON-RPC error response 00:23:08.843 response: 00:23:08.843 { 00:23:08.843 "code": -17, 00:23:08.843 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:08.843 } 00:23:08.843 23:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:23:08.843 23:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.843 23:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n 
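The NOT wrapper above asserts that building the array straight on malloc1/malloc2 is rejected: the malloc bdevs still carry the superblock written through the passthru bdevs, so bdev_raid_create fails with JSON-RPC error -17 ("File exists"). A sketch of the same negative check:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Must fail -- the superblock written through pt1/pt2 is still on these
    # bdevs, so the create is rejected with -17 "File exists".
    if $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo "bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi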
'' ]] 00:23:08.843 23:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.843 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.843 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:09.101 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:09.101 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:09.101 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:09.359 [2024-05-14 23:38:32.464649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:09.359 [2024-05-14 23:38:32.464764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.359 [2024-05-14 23:38:32.464808] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:23:09.359 [2024-05-14 23:38:32.464838] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.359 pt1 00:23:09.359 [2024-05-14 23:38:32.466861] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.359 [2024-05-14 23:38:32.466909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:09.359 [2024-05-14 23:38:32.467003] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:09.359 [2024-05-14 23:38:32.467074] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.359 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.618 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:23:09.618 "name": "raid_bdev1", 00:23:09.618 "uuid": "81f79340-594d-43bb-bbaf-33855af6f7bf", 00:23:09.618 "strip_size_kb": 0, 00:23:09.618 "state": "configuring", 00:23:09.618 "raid_level": "raid1", 00:23:09.618 "superblock": true, 00:23:09.618 "num_base_bdevs": 2, 00:23:09.618 "num_base_bdevs_discovered": 1, 00:23:09.618 "num_base_bdevs_operational": 2, 00:23:09.618 "base_bdevs_list": [ 00:23:09.618 { 00:23:09.618 "name": "pt1", 00:23:09.618 "uuid": "58495e46-853c-555c-8ca5-94082bfcdbaa", 00:23:09.618 "is_configured": true, 00:23:09.618 "data_offset": 256, 00:23:09.618 "data_size": 7936 00:23:09.618 }, 00:23:09.618 { 00:23:09.618 "name": null, 00:23:09.618 "uuid": "fc85b49b-ed9a-58b8-9764-2217e7b90ba6", 00:23:09.618 "is_configured": false, 00:23:09.618 "data_offset": 256, 00:23:09.618 "data_size": 7936 00:23:09.618 } 00:23:09.618 ] 00:23:09.618 }' 00:23:09.618 23:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:09.618 23:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:10.187 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:10.187 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:10.187 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:10.187 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:10.445 [2024-05-14 23:38:33.604801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:10.445 [2024-05-14 23:38:33.604900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.445 [2024-05-14 23:38:33.604957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:23:10.445 [2024-05-14 23:38:33.604989] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.445 [2024-05-14 23:38:33.605379] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.445 [2024-05-14 23:38:33.605422] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:10.445 [2024-05-14 23:38:33.605506] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:10.445 [2024-05-14 23:38:33.605531] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:10.446 [2024-05-14 23:38:33.605590] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:23:10.446 [2024-05-14 23:38:33.605602] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:10.446 [2024-05-14 23:38:33.605689] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:10.446 [2024-05-14 23:38:33.605760] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:23:10.446 [2024-05-14 23:38:33.605772] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:23:10.446 [2024-05-14 23:38:33.605845] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.446 pt2 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:10.446 
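Re-creating only pt1 gives the examine path one of the two members recorded in the superblock, so raid_bdev1 reappears in the "configuring" state above; it only goes online once pt2 is registered as well. A sketch of that two-step re-assembly, using the UUIDs from this run:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # First member back: the array exists again but is still assembling.
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # configuring

    # Second member back: examine completes the set and the array goes online.
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # online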
23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.446 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.704 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:10.704 "name": "raid_bdev1", 00:23:10.704 "uuid": "81f79340-594d-43bb-bbaf-33855af6f7bf", 00:23:10.704 "strip_size_kb": 0, 00:23:10.704 "state": "online", 00:23:10.704 "raid_level": "raid1", 00:23:10.704 "superblock": true, 00:23:10.704 "num_base_bdevs": 2, 00:23:10.704 "num_base_bdevs_discovered": 2, 00:23:10.704 "num_base_bdevs_operational": 2, 00:23:10.704 "base_bdevs_list": [ 00:23:10.704 { 00:23:10.704 "name": "pt1", 00:23:10.704 "uuid": "58495e46-853c-555c-8ca5-94082bfcdbaa", 00:23:10.704 "is_configured": true, 00:23:10.704 "data_offset": 256, 00:23:10.704 "data_size": 7936 00:23:10.704 }, 00:23:10.704 { 00:23:10.704 "name": "pt2", 00:23:10.704 "uuid": "fc85b49b-ed9a-58b8-9764-2217e7b90ba6", 00:23:10.704 "is_configured": true, 00:23:10.704 "data_offset": 256, 00:23:10.704 "data_size": 7936 00:23:10.704 } 00:23:10.704 ] 00:23:10.704 }' 00:23:10.704 23:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:10.704 23:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.639 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:11.639 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:23:11.639 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:11.639 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:11.639 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:11.639 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # 
local name 00:23:11.639 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:11.639 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:11.639 [2024-05-14 23:38:34.789216] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:11.639 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:11.639 "name": "raid_bdev1", 00:23:11.639 "aliases": [ 00:23:11.639 "81f79340-594d-43bb-bbaf-33855af6f7bf" 00:23:11.639 ], 00:23:11.639 "product_name": "Raid Volume", 00:23:11.639 "block_size": 4096, 00:23:11.639 "num_blocks": 7936, 00:23:11.639 "uuid": "81f79340-594d-43bb-bbaf-33855af6f7bf", 00:23:11.639 "md_size": 32, 00:23:11.639 "md_interleave": false, 00:23:11.639 "dif_type": 0, 00:23:11.639 "assigned_rate_limits": { 00:23:11.639 "rw_ios_per_sec": 0, 00:23:11.639 "rw_mbytes_per_sec": 0, 00:23:11.639 "r_mbytes_per_sec": 0, 00:23:11.639 "w_mbytes_per_sec": 0 00:23:11.639 }, 00:23:11.639 "claimed": false, 00:23:11.639 "zoned": false, 00:23:11.639 "supported_io_types": { 00:23:11.639 "read": true, 00:23:11.639 "write": true, 00:23:11.639 "unmap": false, 00:23:11.639 "write_zeroes": true, 00:23:11.639 "flush": false, 00:23:11.639 "reset": true, 00:23:11.639 "compare": false, 00:23:11.639 "compare_and_write": false, 00:23:11.639 "abort": false, 00:23:11.640 "nvme_admin": false, 00:23:11.640 "nvme_io": false 00:23:11.640 }, 00:23:11.640 "memory_domains": [ 00:23:11.640 { 00:23:11.640 "dma_device_id": "system", 00:23:11.640 "dma_device_type": 1 00:23:11.640 }, 00:23:11.640 { 00:23:11.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.640 "dma_device_type": 2 00:23:11.640 }, 00:23:11.640 { 00:23:11.640 "dma_device_id": "system", 00:23:11.640 "dma_device_type": 1 00:23:11.640 }, 00:23:11.640 { 00:23:11.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.640 "dma_device_type": 2 00:23:11.640 } 00:23:11.640 ], 00:23:11.640 "driver_specific": { 00:23:11.640 "raid": { 00:23:11.640 "uuid": "81f79340-594d-43bb-bbaf-33855af6f7bf", 00:23:11.640 "strip_size_kb": 0, 00:23:11.640 "state": "online", 00:23:11.640 "raid_level": "raid1", 00:23:11.640 "superblock": true, 00:23:11.640 "num_base_bdevs": 2, 00:23:11.640 "num_base_bdevs_discovered": 2, 00:23:11.640 "num_base_bdevs_operational": 2, 00:23:11.640 "base_bdevs_list": [ 00:23:11.640 { 00:23:11.640 "name": "pt1", 00:23:11.640 "uuid": "58495e46-853c-555c-8ca5-94082bfcdbaa", 00:23:11.640 "is_configured": true, 00:23:11.640 "data_offset": 256, 00:23:11.640 "data_size": 7936 00:23:11.640 }, 00:23:11.640 { 00:23:11.640 "name": "pt2", 00:23:11.640 "uuid": "fc85b49b-ed9a-58b8-9764-2217e7b90ba6", 00:23:11.640 "is_configured": true, 00:23:11.640 "data_offset": 256, 00:23:11.640 "data_size": 7936 00:23:11.640 } 00:23:11.640 ] 00:23:11.640 } 00:23:11.640 } 00:23:11.640 }' 00:23:11.640 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:11.640 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:23:11.640 pt2' 00:23:11.640 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:11.640 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:11.640 23:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:11.898 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:11.898 "name": "pt1", 00:23:11.898 "aliases": [ 00:23:11.898 "58495e46-853c-555c-8ca5-94082bfcdbaa" 00:23:11.898 ], 00:23:11.898 "product_name": "passthru", 00:23:11.898 "block_size": 4096, 00:23:11.898 "num_blocks": 8192, 00:23:11.898 "uuid": "58495e46-853c-555c-8ca5-94082bfcdbaa", 00:23:11.898 "md_size": 32, 00:23:11.898 "md_interleave": false, 00:23:11.898 "dif_type": 0, 00:23:11.898 "assigned_rate_limits": { 00:23:11.898 "rw_ios_per_sec": 0, 00:23:11.898 "rw_mbytes_per_sec": 0, 00:23:11.899 "r_mbytes_per_sec": 0, 00:23:11.899 "w_mbytes_per_sec": 0 00:23:11.899 }, 00:23:11.899 "claimed": true, 00:23:11.899 "claim_type": "exclusive_write", 00:23:11.899 "zoned": false, 00:23:11.899 "supported_io_types": { 00:23:11.899 "read": true, 00:23:11.899 "write": true, 00:23:11.899 "unmap": true, 00:23:11.899 "write_zeroes": true, 00:23:11.899 "flush": true, 00:23:11.899 "reset": true, 00:23:11.899 "compare": false, 00:23:11.899 "compare_and_write": false, 00:23:11.899 "abort": true, 00:23:11.899 "nvme_admin": false, 00:23:11.899 "nvme_io": false 00:23:11.899 }, 00:23:11.899 "memory_domains": [ 00:23:11.899 { 00:23:11.899 "dma_device_id": "system", 00:23:11.899 "dma_device_type": 1 00:23:11.899 }, 00:23:11.899 { 00:23:11.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.899 "dma_device_type": 2 00:23:11.899 } 00:23:11.899 ], 00:23:11.899 "driver_specific": { 00:23:11.899 "passthru": { 00:23:11.899 "name": "pt1", 00:23:11.899 "base_bdev_name": "malloc1" 00:23:11.899 } 00:23:11.899 } 00:23:11.899 }' 00:23:11.899 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:11.899 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:12.157 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:23:12.157 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:12.157 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:12.157 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:23:12.157 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:12.157 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:12.417 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:23:12.417 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:12.417 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:12.417 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:23:12.417 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:12.417 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:12.417 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b pt2 00:23:12.676 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:12.676 "name": "pt2", 00:23:12.676 "aliases": [ 00:23:12.676 "fc85b49b-ed9a-58b8-9764-2217e7b90ba6" 00:23:12.676 ], 00:23:12.676 "product_name": "passthru", 00:23:12.676 "block_size": 4096, 00:23:12.676 "num_blocks": 8192, 00:23:12.676 "uuid": "fc85b49b-ed9a-58b8-9764-2217e7b90ba6", 00:23:12.676 "md_size": 32, 00:23:12.676 "md_interleave": false, 00:23:12.676 "dif_type": 0, 00:23:12.676 "assigned_rate_limits": { 00:23:12.676 "rw_ios_per_sec": 0, 00:23:12.676 "rw_mbytes_per_sec": 0, 00:23:12.676 "r_mbytes_per_sec": 0, 00:23:12.676 "w_mbytes_per_sec": 0 00:23:12.676 }, 00:23:12.676 "claimed": true, 00:23:12.676 "claim_type": "exclusive_write", 00:23:12.676 "zoned": false, 00:23:12.676 "supported_io_types": { 00:23:12.676 "read": true, 00:23:12.676 "write": true, 00:23:12.676 "unmap": true, 00:23:12.676 "write_zeroes": true, 00:23:12.676 "flush": true, 00:23:12.676 "reset": true, 00:23:12.676 "compare": false, 00:23:12.676 "compare_and_write": false, 00:23:12.676 "abort": true, 00:23:12.676 "nvme_admin": false, 00:23:12.676 "nvme_io": false 00:23:12.676 }, 00:23:12.676 "memory_domains": [ 00:23:12.676 { 00:23:12.676 "dma_device_id": "system", 00:23:12.676 "dma_device_type": 1 00:23:12.676 }, 00:23:12.676 { 00:23:12.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.676 "dma_device_type": 2 00:23:12.676 } 00:23:12.676 ], 00:23:12.676 "driver_specific": { 00:23:12.676 "passthru": { 00:23:12.676 "name": "pt2", 00:23:12.676 "base_bdev_name": "malloc2" 00:23:12.676 } 00:23:12.676 } 00:23:12.676 }' 00:23:12.676 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:12.676 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:12.935 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:23:12.935 23:38:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:12.935 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:12.935 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:23:12.935 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:12.935 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:13.193 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:23:13.193 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:13.193 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:13.193 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:23:13.193 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:13.193 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:13.453 [2024-05-14 23:38:36.589443] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:13.453 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 81f79340-594d-43bb-bbaf-33855af6f7bf 
'!=' 81f79340-594d-43bb-bbaf-33855af6f7bf ']' 00:23:13.453 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:13.453 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:23:13.453 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@215 -- # return 0 00:23:13.453 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:13.713 [2024-05-14 23:38:36.797309] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.713 23:38:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.972 23:38:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:13.972 "name": "raid_bdev1", 00:23:13.972 "uuid": "81f79340-594d-43bb-bbaf-33855af6f7bf", 00:23:13.972 "strip_size_kb": 0, 00:23:13.972 "state": "online", 00:23:13.972 "raid_level": "raid1", 00:23:13.972 "superblock": true, 00:23:13.972 "num_base_bdevs": 2, 00:23:13.972 "num_base_bdevs_discovered": 1, 00:23:13.972 "num_base_bdevs_operational": 1, 00:23:13.972 "base_bdevs_list": [ 00:23:13.972 { 00:23:13.972 "name": null, 00:23:13.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.972 "is_configured": false, 00:23:13.972 "data_offset": 256, 00:23:13.972 "data_size": 7936 00:23:13.972 }, 00:23:13.972 { 00:23:13.972 "name": "pt2", 00:23:13.972 "uuid": "fc85b49b-ed9a-58b8-9764-2217e7b90ba6", 00:23:13.972 "is_configured": true, 00:23:13.972 "data_offset": 256, 00:23:13.972 "data_size": 7936 00:23:13.972 } 00:23:13.972 ] 00:23:13.972 }' 00:23:13.972 23:38:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:13.972 23:38:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.539 23:38:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
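Because raid1 is redundant (the has_redundancy check above), removing one passthru leg has to leave the array online with a single discovered member rather than failing it. A sketch of that assertion, reusing the verify helper's jq filter:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Drop one leg of the mirror.
    $rpc bdev_passthru_delete pt1

    # Still online, but only one of the two base bdevs is discovered.
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ "$(jq -r '.state' <<< "$info")" == online ]]
    [[ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" == 1 ]]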
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:14.797 [2024-05-14 23:38:38.029592] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:14.797 [2024-05-14 23:38:38.029640] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:14.797 [2024-05-14 23:38:38.029731] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:14.797 [2024-05-14 23:38:38.029782] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:14.797 [2024-05-14 23:38:38.029797] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:23:14.797 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.797 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:15.055 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:15.055 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:15.055 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:15.055 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:15.055 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:15.314 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:15.314 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:15.314 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:15.314 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:15.314 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:23:15.314 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:15.572 [2024-05-14 23:38:38.689598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:15.572 [2024-05-14 23:38:38.689742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.572 [2024-05-14 23:38:38.689793] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e280 00:23:15.572 [2024-05-14 23:38:38.689821] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.572 [2024-05-14 23:38:38.691846] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.572 [2024-05-14 23:38:38.691907] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:15.572 [2024-05-14 23:38:38.692043] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:15.572 [2024-05-14 23:38:38.692131] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:15.572 [2024-05-14 23:38:38.692242] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000011c00 00:23:15.572 [2024-05-14 23:38:38.692262] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:15.572 [2024-05-14 23:38:38.692393] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:15.572 [2024-05-14 23:38:38.692511] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:23:15.572 [2024-05-14 23:38:38.692537] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:23:15.572 pt2 00:23:15.572 [2024-05-14 23:38:38.692655] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.572 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:15.572 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:15.572 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:15.572 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:15.572 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:15.572 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:15.572 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:15.572 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:15.573 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:15.573 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:15.573 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.573 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.830 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:15.830 "name": "raid_bdev1", 00:23:15.830 "uuid": "81f79340-594d-43bb-bbaf-33855af6f7bf", 00:23:15.830 "strip_size_kb": 0, 00:23:15.830 "state": "online", 00:23:15.830 "raid_level": "raid1", 00:23:15.830 "superblock": true, 00:23:15.830 "num_base_bdevs": 2, 00:23:15.830 "num_base_bdevs_discovered": 1, 00:23:15.830 "num_base_bdevs_operational": 1, 00:23:15.830 "base_bdevs_list": [ 00:23:15.830 { 00:23:15.830 "name": null, 00:23:15.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.830 "is_configured": false, 00:23:15.830 "data_offset": 256, 00:23:15.830 "data_size": 7936 00:23:15.830 }, 00:23:15.830 { 00:23:15.830 "name": "pt2", 00:23:15.830 "uuid": "fc85b49b-ed9a-58b8-9764-2217e7b90ba6", 00:23:15.830 "is_configured": true, 00:23:15.830 "data_offset": 256, 00:23:15.830 "data_size": 7936 00:23:15.830 } 00:23:15.830 ] 00:23:15.830 }' 00:23:15.831 23:38:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:15.831 23:38:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.397 23:38:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:23:16.397 23:38:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:16.397 23:38:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:23:16.655 [2024-05-14 23:38:39.901924] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:16.655 23:38:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # '[' 81f79340-594d-43bb-bbaf-33855af6f7bf '!=' 81f79340-594d-43bb-bbaf-33855af6f7bf ']' 00:23:16.655 23:38:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@568 -- # killprocess 74152 00:23:16.655 23:38:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@946 -- # '[' -z 74152 ']' 00:23:16.655 23:38:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # kill -0 74152 00:23:16.655 23:38:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # uname 00:23:16.655 23:38:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:16.655 23:38:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74152 00:23:16.914 23:38:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:16.914 23:38:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:16.914 killing process with pid 74152 00:23:16.914 23:38:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74152' 00:23:16.914 23:38:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@965 -- # kill 74152 00:23:16.914 23:38:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # wait 74152 00:23:16.914 [2024-05-14 23:38:39.944406] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:16.914 [2024-05-14 23:38:39.944484] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:16.914 [2024-05-14 23:38:39.944528] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:16.914 [2024-05-14 23:38:39.944542] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:23:16.914 [2024-05-14 23:38:40.121309] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:18.291 ************************************ 00:23:18.291 END TEST raid_superblock_test_md_separate 00:23:18.291 ************************************ 00:23:18.291 23:38:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # return 0 00:23:18.291 00:23:18.291 real 0m15.328s 00:23:18.291 user 0m28.164s 00:23:18.291 sys 0m1.542s 00:23:18.291 23:38:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:18.291 23:38:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.291 23:38:41 bdev_raid -- bdev/bdev_raid.sh@853 -- # '[' '' = true ']' 00:23:18.291 23:38:41 bdev_raid -- bdev/bdev_raid.sh@857 -- # base_malloc_params='-m 32 -i' 00:23:18.291 23:38:41 bdev_raid -- bdev/bdev_raid.sh@858 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:23:18.291 23:38:41 bdev_raid -- 
common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:18.291 23:38:41 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:18.291 23:38:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.291 ************************************ 00:23:18.291 START TEST raid_state_function_test_sb_md_interleaved 00:23:18.291 ************************************ 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # echo BaseBdev1 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # echo BaseBdev2 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:18.291 Process raid pid: 74632 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # raid_pid=74632 00:23:18.291 23:38:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 74632' 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@247 -- # waitforlisten 74632 /var/tmp/spdk-raid.sock 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:18.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 74632 ']' 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:18.291 23:38:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:18.291 [2024-05-14 23:38:41.521351] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:23:18.291 [2024-05-14 23:38:41.521542] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.549 [2024-05-14 23:38:41.683648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.807 [2024-05-14 23:38:41.965078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.065 [2024-05-14 23:38:42.163534] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:19.065 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:19.065 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:23:19.066 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:19.324 [2024-05-14 23:38:42.568466] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:19.324 [2024-05-14 23:38:42.568548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:19.324 [2024-05-14 23:38:42.568580] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:19.324 [2024-05-14 23:38:42.568600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:19.324 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:19.324 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:19.324 23:38:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:19.324 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:19.324 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:19.324 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:19.324 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.324 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.324 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.324 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.324 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.324 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.583 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:19.583 "name": "Existed_Raid", 00:23:19.583 "uuid": "f2b96d9a-e496-4d15-85e1-08010a8d672a", 00:23:19.583 "strip_size_kb": 0, 00:23:19.583 "state": "configuring", 00:23:19.583 "raid_level": "raid1", 00:23:19.583 "superblock": true, 00:23:19.583 "num_base_bdevs": 2, 00:23:19.583 "num_base_bdevs_discovered": 0, 00:23:19.583 "num_base_bdevs_operational": 2, 00:23:19.583 "base_bdevs_list": [ 00:23:19.583 { 00:23:19.583 "name": "BaseBdev1", 00:23:19.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.583 "is_configured": false, 00:23:19.583 "data_offset": 0, 00:23:19.583 "data_size": 0 00:23:19.583 }, 00:23:19.583 { 00:23:19.583 "name": "BaseBdev2", 00:23:19.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.583 "is_configured": false, 00:23:19.583 "data_offset": 0, 00:23:19.583 "data_size": 0 00:23:19.583 } 00:23:19.583 ] 00:23:19.583 }' 00:23:19.583 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:19.583 23:38:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 23:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:20.517 [2024-05-14 23:38:43.732482] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:20.517 [2024-05-14 23:38:43.732530] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state configuring 00:23:20.518 23:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:20.776 [2024-05-14 23:38:44.012569] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:20.776 [2024-05-14 23:38:44.012693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:23:20.776 [2024-05-14 23:38:44.012711] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:20.776 [2024-05-14 23:38:44.012738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:20.776 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:23:21.035 BaseBdev1 00:23:21.035 [2024-05-14 23:38:44.293378] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.036 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:23:21.036 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:21.036 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:21.036 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:23:21.036 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:21.036 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:21.036 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:21.295 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:21.553 [ 00:23:21.553 { 00:23:21.553 "name": "BaseBdev1", 00:23:21.553 "aliases": [ 00:23:21.553 "0ee542d6-df7a-4fe9-a248-cf195ada006e" 00:23:21.553 ], 00:23:21.553 "product_name": "Malloc disk", 00:23:21.553 "block_size": 4128, 00:23:21.553 "num_blocks": 8192, 00:23:21.553 "uuid": "0ee542d6-df7a-4fe9-a248-cf195ada006e", 00:23:21.553 "md_size": 32, 00:23:21.553 "md_interleave": true, 00:23:21.553 "dif_type": 0, 00:23:21.553 "assigned_rate_limits": { 00:23:21.553 "rw_ios_per_sec": 0, 00:23:21.554 "rw_mbytes_per_sec": 0, 00:23:21.554 "r_mbytes_per_sec": 0, 00:23:21.554 "w_mbytes_per_sec": 0 00:23:21.554 }, 00:23:21.554 "claimed": true, 00:23:21.554 "claim_type": "exclusive_write", 00:23:21.554 "zoned": false, 00:23:21.554 "supported_io_types": { 00:23:21.554 "read": true, 00:23:21.554 "write": true, 00:23:21.554 "unmap": true, 00:23:21.554 "write_zeroes": true, 00:23:21.554 "flush": true, 00:23:21.554 "reset": true, 00:23:21.554 "compare": false, 00:23:21.554 "compare_and_write": false, 00:23:21.554 "abort": true, 00:23:21.554 "nvme_admin": false, 00:23:21.554 "nvme_io": false 00:23:21.554 }, 00:23:21.554 "memory_domains": [ 00:23:21.554 { 00:23:21.554 "dma_device_id": "system", 00:23:21.554 "dma_device_type": 1 00:23:21.554 }, 00:23:21.554 { 00:23:21.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.554 "dma_device_type": 2 00:23:21.554 } 00:23:21.554 ], 00:23:21.554 "driver_specific": {} 00:23:21.554 } 00:23:21.554 ] 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:21.554 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.813 23:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.071 23:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:22.071 "name": "Existed_Raid", 00:23:22.071 "uuid": "9cc85169-c1fe-4664-9190-1382cf822959", 00:23:22.071 "strip_size_kb": 0, 00:23:22.072 "state": "configuring", 00:23:22.072 "raid_level": "raid1", 00:23:22.072 "superblock": true, 00:23:22.072 "num_base_bdevs": 2, 00:23:22.072 "num_base_bdevs_discovered": 1, 00:23:22.072 "num_base_bdevs_operational": 2, 00:23:22.072 "base_bdevs_list": [ 00:23:22.072 { 00:23:22.072 "name": "BaseBdev1", 00:23:22.072 "uuid": "0ee542d6-df7a-4fe9-a248-cf195ada006e", 00:23:22.072 "is_configured": true, 00:23:22.072 "data_offset": 256, 00:23:22.072 "data_size": 7936 00:23:22.072 }, 00:23:22.072 { 00:23:22.072 "name": "BaseBdev2", 00:23:22.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.072 "is_configured": false, 00:23:22.072 "data_offset": 0, 00:23:22.072 "data_size": 0 00:23:22.072 } 00:23:22.072 ] 00:23:22.072 }' 00:23:22.072 23:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.072 23:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.639 23:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:22.897 [2024-05-14 23:38:45.973761] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:22.897 [2024-05-14 23:38:45.973824] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name Existed_Raid, state configuring 00:23:22.897 23:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 
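(Annotation, not part of the captured output: the trace above drives SPDK's JSON-RPC interface through scripts/rpc.py. A minimal by-hand sketch of the same calls, with the socket path and parameters copied from the log, might look like the following; the $rpc shorthand and the simplified ordering, creating both malloc base bdevs before the raid, are ours, whereas the test above registers Existed_Raid while BaseBdev2 is still missing and lets it finish configuring once the second base bdev appears.)

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Two 32 MiB malloc bdevs with 4096-byte data blocks and 32 bytes of interleaved
  # metadata, which is why they report 8192 blocks of 4128 bytes each above.
  $rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1
  $rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2
  # Assemble them into a raid1 bdev that carries an on-disk superblock (-s).
  $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # Inspect the result the same way verify_raid_bdev_state does in the trace.
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'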
00:23:22.897 [2024-05-14 23:38:46.173851] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.897 [2024-05-14 23:38:46.175564] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:22.897 [2024-05-14 23:38:46.175637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.156 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.414 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.414 "name": "Existed_Raid", 00:23:23.414 "uuid": "9f66d6fd-2699-4935-9baa-57c647b45a9f", 00:23:23.414 "strip_size_kb": 0, 00:23:23.414 "state": "configuring", 00:23:23.414 "raid_level": "raid1", 00:23:23.414 "superblock": true, 00:23:23.414 "num_base_bdevs": 2, 00:23:23.414 "num_base_bdevs_discovered": 1, 00:23:23.414 "num_base_bdevs_operational": 2, 00:23:23.414 "base_bdevs_list": [ 00:23:23.414 { 00:23:23.414 "name": "BaseBdev1", 00:23:23.414 "uuid": "0ee542d6-df7a-4fe9-a248-cf195ada006e", 00:23:23.414 "is_configured": true, 00:23:23.414 "data_offset": 256, 00:23:23.414 "data_size": 7936 00:23:23.414 }, 00:23:23.414 { 00:23:23.414 "name": "BaseBdev2", 00:23:23.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.414 "is_configured": false, 00:23:23.414 "data_offset": 0, 00:23:23.414 "data_size": 0 00:23:23.414 } 00:23:23.414 ] 00:23:23.414 }' 00:23:23.414 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.414 23:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:23:23.981 23:38:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:23:24.240 [2024-05-14 23:38:47.503628] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:24.240 [2024-05-14 23:38:47.503811] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:23:24.240 [2024-05-14 23:38:47.503839] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:24.240 [2024-05-14 23:38:47.503913] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:23:24.240 [2024-05-14 23:38:47.503975] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:23:24.240 [2024-05-14 23:38:47.503988] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011c00 00:23:24.240 [2024-05-14 23:38:47.504038] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.240 BaseBdev2 00:23:24.240 23:38:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:23:24.240 23:38:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:24.240 23:38:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:24.240 23:38:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:23:24.241 23:38:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:24.241 23:38:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:24.241 23:38:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:24.500 23:38:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:24.760 [ 00:23:24.760 { 00:23:24.760 "name": "BaseBdev2", 00:23:24.760 "aliases": [ 00:23:24.760 "832bbc57-e1e5-4098-8593-9518648e7510" 00:23:24.760 ], 00:23:24.760 "product_name": "Malloc disk", 00:23:24.760 "block_size": 4128, 00:23:24.760 "num_blocks": 8192, 00:23:24.760 "uuid": "832bbc57-e1e5-4098-8593-9518648e7510", 00:23:24.760 "md_size": 32, 00:23:24.760 "md_interleave": true, 00:23:24.760 "dif_type": 0, 00:23:24.760 "assigned_rate_limits": { 00:23:24.760 "rw_ios_per_sec": 0, 00:23:24.760 "rw_mbytes_per_sec": 0, 00:23:24.760 "r_mbytes_per_sec": 0, 00:23:24.760 "w_mbytes_per_sec": 0 00:23:24.760 }, 00:23:24.760 "claimed": true, 00:23:24.760 "claim_type": "exclusive_write", 00:23:24.760 "zoned": false, 00:23:24.760 "supported_io_types": { 00:23:24.760 "read": true, 00:23:24.760 "write": true, 00:23:24.760 "unmap": true, 00:23:24.760 "write_zeroes": true, 00:23:24.760 "flush": true, 00:23:24.760 "reset": true, 00:23:24.760 "compare": false, 00:23:24.760 "compare_and_write": false, 00:23:24.760 "abort": true, 00:23:24.760 "nvme_admin": false, 00:23:24.760 "nvme_io": false 00:23:24.760 }, 00:23:24.760 "memory_domains": [ 00:23:24.760 { 00:23:24.760 "dma_device_id": 
"system", 00:23:24.760 "dma_device_type": 1 00:23:24.760 }, 00:23:24.760 { 00:23:24.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.760 "dma_device_type": 2 00:23:24.760 } 00:23:24.760 ], 00:23:24.760 "driver_specific": {} 00:23:24.760 } 00:23:24.760 ] 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.760 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.019 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:25.019 "name": "Existed_Raid", 00:23:25.019 "uuid": "9f66d6fd-2699-4935-9baa-57c647b45a9f", 00:23:25.019 "strip_size_kb": 0, 00:23:25.019 "state": "online", 00:23:25.019 "raid_level": "raid1", 00:23:25.019 "superblock": true, 00:23:25.019 "num_base_bdevs": 2, 00:23:25.019 "num_base_bdevs_discovered": 2, 00:23:25.019 "num_base_bdevs_operational": 2, 00:23:25.019 "base_bdevs_list": [ 00:23:25.019 { 00:23:25.019 "name": "BaseBdev1", 00:23:25.019 "uuid": "0ee542d6-df7a-4fe9-a248-cf195ada006e", 00:23:25.019 "is_configured": true, 00:23:25.019 "data_offset": 256, 00:23:25.019 "data_size": 7936 00:23:25.019 }, 00:23:25.019 { 00:23:25.019 "name": "BaseBdev2", 00:23:25.019 "uuid": "832bbc57-e1e5-4098-8593-9518648e7510", 00:23:25.019 "is_configured": true, 00:23:25.019 "data_offset": 256, 00:23:25.019 "data_size": 7936 00:23:25.019 } 00:23:25.019 ] 00:23:25.019 }' 00:23:25.019 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:25.019 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:25.956 
23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:23:25.956 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:23:25.956 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:25.956 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:25.956 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:25.956 23:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:23:25.956 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:25.956 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:25.956 [2024-05-14 23:38:49.228122] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:26.215 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:26.215 "name": "Existed_Raid", 00:23:26.215 "aliases": [ 00:23:26.215 "9f66d6fd-2699-4935-9baa-57c647b45a9f" 00:23:26.215 ], 00:23:26.215 "product_name": "Raid Volume", 00:23:26.215 "block_size": 4128, 00:23:26.215 "num_blocks": 7936, 00:23:26.215 "uuid": "9f66d6fd-2699-4935-9baa-57c647b45a9f", 00:23:26.215 "md_size": 32, 00:23:26.215 "md_interleave": true, 00:23:26.215 "dif_type": 0, 00:23:26.215 "assigned_rate_limits": { 00:23:26.215 "rw_ios_per_sec": 0, 00:23:26.215 "rw_mbytes_per_sec": 0, 00:23:26.215 "r_mbytes_per_sec": 0, 00:23:26.215 "w_mbytes_per_sec": 0 00:23:26.215 }, 00:23:26.215 "claimed": false, 00:23:26.215 "zoned": false, 00:23:26.215 "supported_io_types": { 00:23:26.215 "read": true, 00:23:26.215 "write": true, 00:23:26.215 "unmap": false, 00:23:26.215 "write_zeroes": true, 00:23:26.215 "flush": false, 00:23:26.215 "reset": true, 00:23:26.215 "compare": false, 00:23:26.215 "compare_and_write": false, 00:23:26.215 "abort": false, 00:23:26.215 "nvme_admin": false, 00:23:26.215 "nvme_io": false 00:23:26.215 }, 00:23:26.215 "memory_domains": [ 00:23:26.215 { 00:23:26.215 "dma_device_id": "system", 00:23:26.215 "dma_device_type": 1 00:23:26.215 }, 00:23:26.215 { 00:23:26.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.215 "dma_device_type": 2 00:23:26.215 }, 00:23:26.215 { 00:23:26.215 "dma_device_id": "system", 00:23:26.215 "dma_device_type": 1 00:23:26.215 }, 00:23:26.215 { 00:23:26.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.215 "dma_device_type": 2 00:23:26.215 } 00:23:26.215 ], 00:23:26.215 "driver_specific": { 00:23:26.215 "raid": { 00:23:26.215 "uuid": "9f66d6fd-2699-4935-9baa-57c647b45a9f", 00:23:26.215 "strip_size_kb": 0, 00:23:26.215 "state": "online", 00:23:26.215 "raid_level": "raid1", 00:23:26.215 "superblock": true, 00:23:26.215 "num_base_bdevs": 2, 00:23:26.215 "num_base_bdevs_discovered": 2, 00:23:26.216 "num_base_bdevs_operational": 2, 00:23:26.216 "base_bdevs_list": [ 00:23:26.216 { 00:23:26.216 "name": "BaseBdev1", 00:23:26.216 "uuid": "0ee542d6-df7a-4fe9-a248-cf195ada006e", 00:23:26.216 "is_configured": true, 00:23:26.216 "data_offset": 256, 00:23:26.216 "data_size": 7936 00:23:26.216 }, 00:23:26.216 { 00:23:26.216 "name": 
"BaseBdev2", 00:23:26.216 "uuid": "832bbc57-e1e5-4098-8593-9518648e7510", 00:23:26.216 "is_configured": true, 00:23:26.216 "data_offset": 256, 00:23:26.216 "data_size": 7936 00:23:26.216 } 00:23:26.216 ] 00:23:26.216 } 00:23:26.216 } 00:23:26.216 }' 00:23:26.216 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:26.216 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:23:26.216 BaseBdev2' 00:23:26.216 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:26.216 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:26.216 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:26.474 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:26.474 "name": "BaseBdev1", 00:23:26.474 "aliases": [ 00:23:26.474 "0ee542d6-df7a-4fe9-a248-cf195ada006e" 00:23:26.474 ], 00:23:26.474 "product_name": "Malloc disk", 00:23:26.474 "block_size": 4128, 00:23:26.474 "num_blocks": 8192, 00:23:26.474 "uuid": "0ee542d6-df7a-4fe9-a248-cf195ada006e", 00:23:26.474 "md_size": 32, 00:23:26.474 "md_interleave": true, 00:23:26.474 "dif_type": 0, 00:23:26.474 "assigned_rate_limits": { 00:23:26.474 "rw_ios_per_sec": 0, 00:23:26.474 "rw_mbytes_per_sec": 0, 00:23:26.474 "r_mbytes_per_sec": 0, 00:23:26.474 "w_mbytes_per_sec": 0 00:23:26.474 }, 00:23:26.474 "claimed": true, 00:23:26.474 "claim_type": "exclusive_write", 00:23:26.474 "zoned": false, 00:23:26.474 "supported_io_types": { 00:23:26.474 "read": true, 00:23:26.474 "write": true, 00:23:26.474 "unmap": true, 00:23:26.474 "write_zeroes": true, 00:23:26.474 "flush": true, 00:23:26.474 "reset": true, 00:23:26.474 "compare": false, 00:23:26.474 "compare_and_write": false, 00:23:26.474 "abort": true, 00:23:26.474 "nvme_admin": false, 00:23:26.474 "nvme_io": false 00:23:26.474 }, 00:23:26.474 "memory_domains": [ 00:23:26.474 { 00:23:26.474 "dma_device_id": "system", 00:23:26.474 "dma_device_type": 1 00:23:26.474 }, 00:23:26.474 { 00:23:26.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.474 "dma_device_type": 2 00:23:26.474 } 00:23:26.474 ], 00:23:26.474 "driver_specific": {} 00:23:26.474 }' 00:23:26.474 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:26.474 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:26.474 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:23:26.474 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:26.474 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:26.732 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:23:26.732 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:26.732 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 
00:23:26.732 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:23:26.733 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:26.733 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:26.733 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:23:26.733 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:26.733 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:26.733 23:38:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:26.991 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:26.991 "name": "BaseBdev2", 00:23:26.991 "aliases": [ 00:23:26.991 "832bbc57-e1e5-4098-8593-9518648e7510" 00:23:26.991 ], 00:23:26.991 "product_name": "Malloc disk", 00:23:26.991 "block_size": 4128, 00:23:26.991 "num_blocks": 8192, 00:23:26.991 "uuid": "832bbc57-e1e5-4098-8593-9518648e7510", 00:23:26.991 "md_size": 32, 00:23:26.991 "md_interleave": true, 00:23:26.991 "dif_type": 0, 00:23:26.991 "assigned_rate_limits": { 00:23:26.991 "rw_ios_per_sec": 0, 00:23:26.991 "rw_mbytes_per_sec": 0, 00:23:26.991 "r_mbytes_per_sec": 0, 00:23:26.991 "w_mbytes_per_sec": 0 00:23:26.991 }, 00:23:26.991 "claimed": true, 00:23:26.991 "claim_type": "exclusive_write", 00:23:26.991 "zoned": false, 00:23:26.991 "supported_io_types": { 00:23:26.991 "read": true, 00:23:26.991 "write": true, 00:23:26.991 "unmap": true, 00:23:26.991 "write_zeroes": true, 00:23:26.991 "flush": true, 00:23:26.991 "reset": true, 00:23:26.991 "compare": false, 00:23:26.991 "compare_and_write": false, 00:23:26.991 "abort": true, 00:23:26.991 "nvme_admin": false, 00:23:26.991 "nvme_io": false 00:23:26.991 }, 00:23:26.991 "memory_domains": [ 00:23:26.991 { 00:23:26.991 "dma_device_id": "system", 00:23:26.991 "dma_device_type": 1 00:23:26.991 }, 00:23:26.991 { 00:23:26.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.991 "dma_device_type": 2 00:23:26.991 } 00:23:26.991 ], 00:23:26.991 "driver_specific": {} 00:23:26.991 }' 00:23:26.991 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:26.991 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:27.249 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:23:27.249 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:27.249 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:27.249 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:23:27.249 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:27.249 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:27.508 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 
00:23:27.508 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:27.508 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:27.508 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:23:27.508 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:27.768 [2024-05-14 23:38:50.820225] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # local expected_state 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.768 23:38:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.032 23:38:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:28.032 "name": "Existed_Raid", 00:23:28.032 "uuid": "9f66d6fd-2699-4935-9baa-57c647b45a9f", 00:23:28.032 "strip_size_kb": 0, 00:23:28.032 "state": "online", 00:23:28.032 "raid_level": "raid1", 00:23:28.032 "superblock": true, 00:23:28.032 "num_base_bdevs": 2, 00:23:28.032 "num_base_bdevs_discovered": 1, 00:23:28.032 "num_base_bdevs_operational": 1, 00:23:28.032 "base_bdevs_list": [ 00:23:28.032 { 00:23:28.032 "name": null, 00:23:28.032 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:28.032 "is_configured": false, 00:23:28.032 "data_offset": 256, 00:23:28.032 "data_size": 7936 00:23:28.032 }, 00:23:28.032 { 00:23:28.032 "name": "BaseBdev2", 00:23:28.032 "uuid": "832bbc57-e1e5-4098-8593-9518648e7510", 00:23:28.032 "is_configured": true, 00:23:28.032 "data_offset": 256, 00:23:28.032 "data_size": 7936 00:23:28.032 } 00:23:28.032 ] 00:23:28.032 }' 00:23:28.032 23:38:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:28.032 23:38:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.617 23:38:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:28.617 23:38:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:28.617 23:38:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.617 23:38:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:23:28.876 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:23:28.876 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:28.876 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@292 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:29.134 [2024-05-14 23:38:52.273204] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:29.134 [2024-05-14 23:38:52.273302] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:29.134 [2024-05-14 23:38:52.355475] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:29.134 [2024-05-14 23:38:52.355571] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:29.134 [2024-05-14 23:38:52.355587] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name Existed_Raid, state offline 00:23:29.134 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:29.134 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:29.134 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.134 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@342 -- # killprocess 74632 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 
74632 ']' 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 74632 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74632 00:23:29.393 killing process with pid 74632 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74632' 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 74632 00:23:29.393 23:38:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 74632 00:23:29.393 [2024-05-14 23:38:52.585905] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:29.393 [2024-05-14 23:38:52.586008] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:30.770 23:38:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@344 -- # return 0 00:23:30.770 ************************************ 00:23:30.770 END TEST raid_state_function_test_sb_md_interleaved 00:23:30.770 ************************************ 00:23:30.770 00:23:30.770 real 0m12.403s 00:23:30.770 user 0m22.090s 00:23:30.770 sys 0m1.248s 00:23:30.770 23:38:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:30.770 23:38:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:30.770 23:38:53 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:23:30.770 23:38:53 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:23:30.770 23:38:53 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:30.770 23:38:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:30.770 ************************************ 00:23:30.770 START TEST raid_superblock_test_md_interleaved 00:23:30.770 ************************************ 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:30.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=75008 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 75008 /var/tmp/spdk-raid.sock 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 75008 ']' 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.770 23:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:30.770 [2024-05-14 23:38:53.976571] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
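(Annotation, not part of the captured output: before any of the raid RPCs can run, each test brackets its work with the bdev_svc stub application and the waitforlisten/killprocess helpers from the repo's autotest_common.sh, which is what the raid_pid=75008 assignment and the "Waiting for process to start up..." line above reflect. A reduced sketch of that lifecycle, with the comments and the placement of the RPC calls being ours:)

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # Block until the app is up and its UNIX-domain RPC socket accepts connections.
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
  # ... bdev_malloc_create / bdev_passthru_create / bdev_raid_create RPCs run here ...
  killprocess "$raid_pid"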
00:23:30.770 [2024-05-14 23:38:53.976796] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75008 ] 00:23:31.029 [2024-05-14 23:38:54.142943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.287 [2024-05-14 23:38:54.402339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.546 [2024-05-14 23:38:54.614223] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:31.546 23:38:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:31.546 23:38:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:23:31.546 23:38:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:31.546 23:38:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:31.546 23:38:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:31.546 23:38:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:31.546 23:38:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:31.546 23:38:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:31.546 23:38:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:31.546 23:38:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:31.546 23:38:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:23:31.805 malloc1 00:23:31.805 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:32.064 [2024-05-14 23:38:55.238733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:32.064 [2024-05-14 23:38:55.238865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.064 [2024-05-14 23:38:55.238923] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:23:32.064 [2024-05-14 23:38:55.238969] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.064 [2024-05-14 23:38:55.240761] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.064 [2024-05-14 23:38:55.240810] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:32.064 pt1 00:23:32.064 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:32.064 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:32.064 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:32.064 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:23:32.064 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:32.064 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:32.064 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:32.064 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:32.064 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:23:32.322 malloc2 00:23:32.322 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:32.581 [2024-05-14 23:38:55.652569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:32.581 [2024-05-14 23:38:55.652666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.581 [2024-05-14 23:38:55.652717] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:23:32.581 [2024-05-14 23:38:55.652772] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.581 [2024-05-14 23:38:55.654434] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.581 [2024-05-14 23:38:55.654492] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:32.581 pt2 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:23:32.581 [2024-05-14 23:38:55.844677] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:32.581 [2024-05-14 23:38:55.846155] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:32.581 [2024-05-14 23:38:55.846472] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:23:32.581 [2024-05-14 23:38:55.846494] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:32.581 [2024-05-14 23:38:55.846587] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:23:32.581 [2024-05-14 23:38:55.846650] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:23:32.581 [2024-05-14 23:38:55.846664] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011180 00:23:32.581 [2024-05-14 23:38:55.846716] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:32.581 23:38:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.581 23:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.839 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:32.839 "name": "raid_bdev1", 00:23:32.839 "uuid": "da61c02d-da84-4b1e-a6b1-21f1a4b4164d", 00:23:32.839 "strip_size_kb": 0, 00:23:32.839 "state": "online", 00:23:32.839 "raid_level": "raid1", 00:23:32.839 "superblock": true, 00:23:32.839 "num_base_bdevs": 2, 00:23:32.839 "num_base_bdevs_discovered": 2, 00:23:32.839 "num_base_bdevs_operational": 2, 00:23:32.839 "base_bdevs_list": [ 00:23:32.839 { 00:23:32.839 "name": "pt1", 00:23:32.839 "uuid": "200a00da-c59a-59f2-b589-c2fe40abaee2", 00:23:32.839 "is_configured": true, 00:23:32.839 "data_offset": 256, 00:23:32.839 "data_size": 7936 00:23:32.839 }, 00:23:32.839 { 00:23:32.839 "name": "pt2", 00:23:32.839 "uuid": "ef817ef7-d6a8-5814-b9f8-f7c4a2d49196", 00:23:32.839 "is_configured": true, 00:23:32.839 "data_offset": 256, 00:23:32.839 "data_size": 7936 00:23:32.839 } 00:23:32.839 ] 00:23:32.839 }' 00:23:32.839 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:32.839 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.774 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:33.774 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:23:33.774 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:33.774 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:33.774 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:33.774 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:23:33.774 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:33.774 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:33.774 [2024-05-14 
23:38:56.968984] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:33.774 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:33.774 "name": "raid_bdev1", 00:23:33.774 "aliases": [ 00:23:33.774 "da61c02d-da84-4b1e-a6b1-21f1a4b4164d" 00:23:33.774 ], 00:23:33.774 "product_name": "Raid Volume", 00:23:33.774 "block_size": 4128, 00:23:33.774 "num_blocks": 7936, 00:23:33.774 "uuid": "da61c02d-da84-4b1e-a6b1-21f1a4b4164d", 00:23:33.774 "md_size": 32, 00:23:33.774 "md_interleave": true, 00:23:33.774 "dif_type": 0, 00:23:33.774 "assigned_rate_limits": { 00:23:33.774 "rw_ios_per_sec": 0, 00:23:33.774 "rw_mbytes_per_sec": 0, 00:23:33.774 "r_mbytes_per_sec": 0, 00:23:33.774 "w_mbytes_per_sec": 0 00:23:33.774 }, 00:23:33.774 "claimed": false, 00:23:33.774 "zoned": false, 00:23:33.774 "supported_io_types": { 00:23:33.774 "read": true, 00:23:33.774 "write": true, 00:23:33.774 "unmap": false, 00:23:33.774 "write_zeroes": true, 00:23:33.774 "flush": false, 00:23:33.774 "reset": true, 00:23:33.774 "compare": false, 00:23:33.774 "compare_and_write": false, 00:23:33.774 "abort": false, 00:23:33.774 "nvme_admin": false, 00:23:33.774 "nvme_io": false 00:23:33.774 }, 00:23:33.774 "memory_domains": [ 00:23:33.774 { 00:23:33.774 "dma_device_id": "system", 00:23:33.774 "dma_device_type": 1 00:23:33.774 }, 00:23:33.774 { 00:23:33.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.774 "dma_device_type": 2 00:23:33.774 }, 00:23:33.774 { 00:23:33.774 "dma_device_id": "system", 00:23:33.774 "dma_device_type": 1 00:23:33.774 }, 00:23:33.774 { 00:23:33.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.774 "dma_device_type": 2 00:23:33.774 } 00:23:33.774 ], 00:23:33.774 "driver_specific": { 00:23:33.774 "raid": { 00:23:33.774 "uuid": "da61c02d-da84-4b1e-a6b1-21f1a4b4164d", 00:23:33.774 "strip_size_kb": 0, 00:23:33.774 "state": "online", 00:23:33.774 "raid_level": "raid1", 00:23:33.774 "superblock": true, 00:23:33.774 "num_base_bdevs": 2, 00:23:33.774 "num_base_bdevs_discovered": 2, 00:23:33.774 "num_base_bdevs_operational": 2, 00:23:33.774 "base_bdevs_list": [ 00:23:33.774 { 00:23:33.774 "name": "pt1", 00:23:33.774 "uuid": "200a00da-c59a-59f2-b589-c2fe40abaee2", 00:23:33.774 "is_configured": true, 00:23:33.774 "data_offset": 256, 00:23:33.774 "data_size": 7936 00:23:33.774 }, 00:23:33.774 { 00:23:33.775 "name": "pt2", 00:23:33.775 "uuid": "ef817ef7-d6a8-5814-b9f8-f7c4a2d49196", 00:23:33.775 "is_configured": true, 00:23:33.775 "data_offset": 256, 00:23:33.775 "data_size": 7936 00:23:33.775 } 00:23:33.775 ] 00:23:33.775 } 00:23:33.775 } 00:23:33.775 }' 00:23:33.775 23:38:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:33.775 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:23:33.775 pt2' 00:23:33.775 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:33.775 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:33.775 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:34.039 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:34.039 "name": "pt1", 
00:23:34.039 "aliases": [ 00:23:34.039 "200a00da-c59a-59f2-b589-c2fe40abaee2" 00:23:34.039 ], 00:23:34.039 "product_name": "passthru", 00:23:34.039 "block_size": 4128, 00:23:34.039 "num_blocks": 8192, 00:23:34.039 "uuid": "200a00da-c59a-59f2-b589-c2fe40abaee2", 00:23:34.039 "md_size": 32, 00:23:34.039 "md_interleave": true, 00:23:34.039 "dif_type": 0, 00:23:34.039 "assigned_rate_limits": { 00:23:34.039 "rw_ios_per_sec": 0, 00:23:34.039 "rw_mbytes_per_sec": 0, 00:23:34.039 "r_mbytes_per_sec": 0, 00:23:34.039 "w_mbytes_per_sec": 0 00:23:34.039 }, 00:23:34.039 "claimed": true, 00:23:34.039 "claim_type": "exclusive_write", 00:23:34.039 "zoned": false, 00:23:34.039 "supported_io_types": { 00:23:34.039 "read": true, 00:23:34.039 "write": true, 00:23:34.039 "unmap": true, 00:23:34.040 "write_zeroes": true, 00:23:34.040 "flush": true, 00:23:34.040 "reset": true, 00:23:34.040 "compare": false, 00:23:34.040 "compare_and_write": false, 00:23:34.040 "abort": true, 00:23:34.040 "nvme_admin": false, 00:23:34.040 "nvme_io": false 00:23:34.040 }, 00:23:34.040 "memory_domains": [ 00:23:34.040 { 00:23:34.040 "dma_device_id": "system", 00:23:34.040 "dma_device_type": 1 00:23:34.040 }, 00:23:34.040 { 00:23:34.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.040 "dma_device_type": 2 00:23:34.040 } 00:23:34.040 ], 00:23:34.040 "driver_specific": { 00:23:34.040 "passthru": { 00:23:34.040 "name": "pt1", 00:23:34.040 "base_bdev_name": "malloc1" 00:23:34.040 } 00:23:34.040 } 00:23:34.040 }' 00:23:34.040 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:34.307 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:34.307 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:23:34.307 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:34.307 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:34.307 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:23:34.307 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:34.307 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:34.565 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:23:34.565 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:34.565 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:34.565 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:23:34.565 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:34.565 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:34.565 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:34.824 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:34.824 "name": "pt2", 00:23:34.824 "aliases": [ 00:23:34.824 "ef817ef7-d6a8-5814-b9f8-f7c4a2d49196" 00:23:34.824 ], 00:23:34.824 "product_name": 
"passthru", 00:23:34.824 "block_size": 4128, 00:23:34.824 "num_blocks": 8192, 00:23:34.824 "uuid": "ef817ef7-d6a8-5814-b9f8-f7c4a2d49196", 00:23:34.824 "md_size": 32, 00:23:34.824 "md_interleave": true, 00:23:34.824 "dif_type": 0, 00:23:34.824 "assigned_rate_limits": { 00:23:34.824 "rw_ios_per_sec": 0, 00:23:34.824 "rw_mbytes_per_sec": 0, 00:23:34.824 "r_mbytes_per_sec": 0, 00:23:34.824 "w_mbytes_per_sec": 0 00:23:34.824 }, 00:23:34.824 "claimed": true, 00:23:34.824 "claim_type": "exclusive_write", 00:23:34.824 "zoned": false, 00:23:34.824 "supported_io_types": { 00:23:34.824 "read": true, 00:23:34.824 "write": true, 00:23:34.824 "unmap": true, 00:23:34.824 "write_zeroes": true, 00:23:34.824 "flush": true, 00:23:34.824 "reset": true, 00:23:34.824 "compare": false, 00:23:34.824 "compare_and_write": false, 00:23:34.824 "abort": true, 00:23:34.824 "nvme_admin": false, 00:23:34.824 "nvme_io": false 00:23:34.824 }, 00:23:34.824 "memory_domains": [ 00:23:34.824 { 00:23:34.824 "dma_device_id": "system", 00:23:34.824 "dma_device_type": 1 00:23:34.824 }, 00:23:34.824 { 00:23:34.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.824 "dma_device_type": 2 00:23:34.824 } 00:23:34.824 ], 00:23:34.824 "driver_specific": { 00:23:34.824 "passthru": { 00:23:34.824 "name": "pt2", 00:23:34.824 "base_bdev_name": "malloc2" 00:23:34.824 } 00:23:34.824 } 00:23:34.824 }' 00:23:34.824 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:34.824 23:38:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:34.824 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:23:34.824 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:34.824 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:35.083 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:23:35.083 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:35.083 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:35.083 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:23:35.083 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:35.083 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:35.342 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:23:35.342 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:35.342 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:35.342 [2024-05-14 23:38:58.573126] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:35.342 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=da61c02d-da84-4b1e-a6b1-21f1a4b4164d 00:23:35.342 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z da61c02d-da84-4b1e-a6b1-21f1a4b4164d ']' 00:23:35.342 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:35.601 [2024-05-14 23:38:58.817004] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:35.601 [2024-05-14 23:38:58.817041] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:35.601 [2024-05-14 23:38:58.817113] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:35.601 [2024-05-14 23:38:58.817159] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:35.601 [2024-05-14 23:38:58.817170] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:23:35.601 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:35.601 23:38:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.860 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:35.860 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:35.860 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:35.860 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:36.119 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:36.119 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:36.377 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:36.377 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:36.636 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:36.895 [2024-05-14 23:38:59.961244] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:36.895 [2024-05-14 23:38:59.962898] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:36.895 [2024-05-14 23:38:59.962958] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:36.895 [2024-05-14 23:38:59.963037] bdev_raid.c:3030:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:36.895 [2024-05-14 23:38:59.963078] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.895 [2024-05-14 23:38:59.963092] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:23:36.895 request: 00:23:36.895 { 00:23:36.895 "name": "raid_bdev1", 00:23:36.895 "raid_level": "raid1", 00:23:36.895 "base_bdevs": [ 00:23:36.895 "malloc1", 00:23:36.895 "malloc2" 00:23:36.895 ], 00:23:36.895 "superblock": false, 00:23:36.895 "method": "bdev_raid_create", 00:23:36.895 "req_id": 1 00:23:36.895 } 00:23:36.895 Got JSON-RPC error response 00:23:36.895 response: 00:23:36.895 { 00:23:36.895 "code": -17, 00:23:36.895 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:36.895 } 00:23:36.895 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:23:36.895 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:36.895 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:36.895 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:36.895 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:36.895 23:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.152 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:37.152 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:37.152 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:37.410 [2024-05-14 
23:39:00.441236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:37.410 [2024-05-14 23:39:00.441343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:37.410 [2024-05-14 23:39:00.441387] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:23:37.410 [2024-05-14 23:39:00.441418] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:37.410 [2024-05-14 23:39:00.442917] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:37.410 [2024-05-14 23:39:00.442961] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:37.410 [2024-05-14 23:39:00.443016] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:37.410 [2024-05-14 23:39:00.443080] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:37.410 pt1 00:23:37.410 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:37.410 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:37.410 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:37.410 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:37.410 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:37.411 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:37.411 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:37.411 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:37.411 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:37.411 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:37.411 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.411 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.669 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:37.669 "name": "raid_bdev1", 00:23:37.669 "uuid": "da61c02d-da84-4b1e-a6b1-21f1a4b4164d", 00:23:37.669 "strip_size_kb": 0, 00:23:37.669 "state": "configuring", 00:23:37.669 "raid_level": "raid1", 00:23:37.669 "superblock": true, 00:23:37.669 "num_base_bdevs": 2, 00:23:37.669 "num_base_bdevs_discovered": 1, 00:23:37.669 "num_base_bdevs_operational": 2, 00:23:37.669 "base_bdevs_list": [ 00:23:37.669 { 00:23:37.669 "name": "pt1", 00:23:37.669 "uuid": "200a00da-c59a-59f2-b589-c2fe40abaee2", 00:23:37.669 "is_configured": true, 00:23:37.669 "data_offset": 256, 00:23:37.669 "data_size": 7936 00:23:37.669 }, 00:23:37.669 { 00:23:37.669 "name": null, 00:23:37.669 "uuid": "ef817ef7-d6a8-5814-b9f8-f7c4a2d49196", 00:23:37.669 "is_configured": false, 00:23:37.669 "data_offset": 256, 00:23:37.669 "data_size": 7936 00:23:37.669 } 00:23:37.669 ] 00:23:37.669 }' 
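The "configuring" dump above shows the md-interleaved superblock doing its job: after the raid and its passthru bdevs were torn down, re-registering pt1 alone lets bdev examine find the superblock and re-create raid_bdev1 with one of its two base bdevs discovered, and it only transitions back to online once pt2 is registered as well, as the trace below confirms. A hedged sketch of that re-assembly step (the RPC shorthand variable is an assumption; the commands, socket path and UUIDs are taken verbatim from the trace):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Re-register the first base bdev; examine spots the raid superblock on it and
# re-creates raid_bdev1 in "configuring" state (num_base_bdevs_discovered: 1).
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
# Registering the second base completes assembly and the raid goes online again.
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002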
00:23:37.669 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:37.669 23:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.235 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:38.235 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:38.235 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:38.235 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:38.493 [2024-05-14 23:39:01.669461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:38.493 [2024-05-14 23:39:01.669577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.493 [2024-05-14 23:39:01.669639] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002cd80 00:23:38.493 [2024-05-14 23:39:01.669674] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.493 [2024-05-14 23:39:01.669839] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.493 [2024-05-14 23:39:01.669883] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:38.493 [2024-05-14 23:39:01.669942] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:38.493 [2024-05-14 23:39:01.669979] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:38.493 [2024-05-14 23:39:01.670069] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:23:38.493 [2024-05-14 23:39:01.670085] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:38.493 [2024-05-14 23:39:01.670349] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:38.493 [2024-05-14 23:39:01.670424] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:23:38.493 [2024-05-14 23:39:01.670439] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:23:38.493 [2024-05-14 23:39:01.670497] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.493 pt2 00:23:38.493 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:38.493 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:38.493 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:38.493 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:38.493 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:38.494 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:38.494 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:38.494 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 
-- # local num_base_bdevs_operational=2 00:23:38.494 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:38.494 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:38.494 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:38.494 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:38.494 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.494 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.752 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:38.752 "name": "raid_bdev1", 00:23:38.752 "uuid": "da61c02d-da84-4b1e-a6b1-21f1a4b4164d", 00:23:38.752 "strip_size_kb": 0, 00:23:38.752 "state": "online", 00:23:38.752 "raid_level": "raid1", 00:23:38.752 "superblock": true, 00:23:38.752 "num_base_bdevs": 2, 00:23:38.752 "num_base_bdevs_discovered": 2, 00:23:38.752 "num_base_bdevs_operational": 2, 00:23:38.752 "base_bdevs_list": [ 00:23:38.752 { 00:23:38.752 "name": "pt1", 00:23:38.752 "uuid": "200a00da-c59a-59f2-b589-c2fe40abaee2", 00:23:38.752 "is_configured": true, 00:23:38.752 "data_offset": 256, 00:23:38.752 "data_size": 7936 00:23:38.752 }, 00:23:38.752 { 00:23:38.752 "name": "pt2", 00:23:38.752 "uuid": "ef817ef7-d6a8-5814-b9f8-f7c4a2d49196", 00:23:38.752 "is_configured": true, 00:23:38.752 "data_offset": 256, 00:23:38.752 "data_size": 7936 00:23:38.752 } 00:23:38.752 ] 00:23:38.752 }' 00:23:38.752 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:38.752 23:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.329 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:39.329 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:23:39.329 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:39.329 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:39.329 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:39.329 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:23:39.329 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:39.329 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:39.588 [2024-05-14 23:39:02.709767] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:39.588 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:39.588 "name": "raid_bdev1", 00:23:39.588 "aliases": [ 00:23:39.588 "da61c02d-da84-4b1e-a6b1-21f1a4b4164d" 00:23:39.588 ], 00:23:39.588 "product_name": "Raid Volume", 00:23:39.588 "block_size": 4128, 00:23:39.588 
"num_blocks": 7936, 00:23:39.588 "uuid": "da61c02d-da84-4b1e-a6b1-21f1a4b4164d", 00:23:39.588 "md_size": 32, 00:23:39.588 "md_interleave": true, 00:23:39.588 "dif_type": 0, 00:23:39.588 "assigned_rate_limits": { 00:23:39.588 "rw_ios_per_sec": 0, 00:23:39.588 "rw_mbytes_per_sec": 0, 00:23:39.588 "r_mbytes_per_sec": 0, 00:23:39.588 "w_mbytes_per_sec": 0 00:23:39.588 }, 00:23:39.588 "claimed": false, 00:23:39.588 "zoned": false, 00:23:39.588 "supported_io_types": { 00:23:39.588 "read": true, 00:23:39.588 "write": true, 00:23:39.588 "unmap": false, 00:23:39.588 "write_zeroes": true, 00:23:39.588 "flush": false, 00:23:39.588 "reset": true, 00:23:39.588 "compare": false, 00:23:39.588 "compare_and_write": false, 00:23:39.588 "abort": false, 00:23:39.588 "nvme_admin": false, 00:23:39.588 "nvme_io": false 00:23:39.588 }, 00:23:39.588 "memory_domains": [ 00:23:39.588 { 00:23:39.588 "dma_device_id": "system", 00:23:39.588 "dma_device_type": 1 00:23:39.588 }, 00:23:39.588 { 00:23:39.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.588 "dma_device_type": 2 00:23:39.588 }, 00:23:39.588 { 00:23:39.588 "dma_device_id": "system", 00:23:39.588 "dma_device_type": 1 00:23:39.588 }, 00:23:39.588 { 00:23:39.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.588 "dma_device_type": 2 00:23:39.588 } 00:23:39.588 ], 00:23:39.588 "driver_specific": { 00:23:39.588 "raid": { 00:23:39.588 "uuid": "da61c02d-da84-4b1e-a6b1-21f1a4b4164d", 00:23:39.588 "strip_size_kb": 0, 00:23:39.588 "state": "online", 00:23:39.588 "raid_level": "raid1", 00:23:39.588 "superblock": true, 00:23:39.588 "num_base_bdevs": 2, 00:23:39.588 "num_base_bdevs_discovered": 2, 00:23:39.588 "num_base_bdevs_operational": 2, 00:23:39.588 "base_bdevs_list": [ 00:23:39.588 { 00:23:39.588 "name": "pt1", 00:23:39.588 "uuid": "200a00da-c59a-59f2-b589-c2fe40abaee2", 00:23:39.588 "is_configured": true, 00:23:39.588 "data_offset": 256, 00:23:39.588 "data_size": 7936 00:23:39.588 }, 00:23:39.588 { 00:23:39.588 "name": "pt2", 00:23:39.588 "uuid": "ef817ef7-d6a8-5814-b9f8-f7c4a2d49196", 00:23:39.588 "is_configured": true, 00:23:39.588 "data_offset": 256, 00:23:39.588 "data_size": 7936 00:23:39.588 } 00:23:39.588 ] 00:23:39.588 } 00:23:39.588 } 00:23:39.589 }' 00:23:39.589 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:39.589 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:23:39.589 pt2' 00:23:39.589 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:39.589 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:39.589 23:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:39.847 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:39.847 "name": "pt1", 00:23:39.847 "aliases": [ 00:23:39.847 "200a00da-c59a-59f2-b589-c2fe40abaee2" 00:23:39.847 ], 00:23:39.847 "product_name": "passthru", 00:23:39.847 "block_size": 4128, 00:23:39.847 "num_blocks": 8192, 00:23:39.847 "uuid": "200a00da-c59a-59f2-b589-c2fe40abaee2", 00:23:39.847 "md_size": 32, 00:23:39.847 "md_interleave": true, 00:23:39.847 "dif_type": 0, 00:23:39.847 "assigned_rate_limits": { 00:23:39.847 "rw_ios_per_sec": 0, 
00:23:39.847 "rw_mbytes_per_sec": 0, 00:23:39.847 "r_mbytes_per_sec": 0, 00:23:39.847 "w_mbytes_per_sec": 0 00:23:39.847 }, 00:23:39.847 "claimed": true, 00:23:39.847 "claim_type": "exclusive_write", 00:23:39.847 "zoned": false, 00:23:39.847 "supported_io_types": { 00:23:39.847 "read": true, 00:23:39.847 "write": true, 00:23:39.847 "unmap": true, 00:23:39.847 "write_zeroes": true, 00:23:39.847 "flush": true, 00:23:39.847 "reset": true, 00:23:39.847 "compare": false, 00:23:39.847 "compare_and_write": false, 00:23:39.847 "abort": true, 00:23:39.847 "nvme_admin": false, 00:23:39.847 "nvme_io": false 00:23:39.847 }, 00:23:39.847 "memory_domains": [ 00:23:39.847 { 00:23:39.847 "dma_device_id": "system", 00:23:39.847 "dma_device_type": 1 00:23:39.847 }, 00:23:39.847 { 00:23:39.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.847 "dma_device_type": 2 00:23:39.847 } 00:23:39.847 ], 00:23:39.847 "driver_specific": { 00:23:39.847 "passthru": { 00:23:39.847 "name": "pt1", 00:23:39.847 "base_bdev_name": "malloc1" 00:23:39.847 } 00:23:39.847 } 00:23:39.847 }' 00:23:39.847 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:39.847 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:39.847 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:23:39.847 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:40.105 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:40.105 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:23:40.105 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:40.105 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:40.105 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:23:40.105 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:40.363 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:40.363 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:23:40.363 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:40.363 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:40.363 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:40.715 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:40.715 "name": "pt2", 00:23:40.715 "aliases": [ 00:23:40.715 "ef817ef7-d6a8-5814-b9f8-f7c4a2d49196" 00:23:40.715 ], 00:23:40.715 "product_name": "passthru", 00:23:40.715 "block_size": 4128, 00:23:40.715 "num_blocks": 8192, 00:23:40.715 "uuid": "ef817ef7-d6a8-5814-b9f8-f7c4a2d49196", 00:23:40.715 "md_size": 32, 00:23:40.715 "md_interleave": true, 00:23:40.715 "dif_type": 0, 00:23:40.715 "assigned_rate_limits": { 00:23:40.715 "rw_ios_per_sec": 0, 00:23:40.715 "rw_mbytes_per_sec": 0, 00:23:40.715 "r_mbytes_per_sec": 0, 00:23:40.715 "w_mbytes_per_sec": 0 00:23:40.715 }, 
00:23:40.715 "claimed": true, 00:23:40.715 "claim_type": "exclusive_write", 00:23:40.715 "zoned": false, 00:23:40.715 "supported_io_types": { 00:23:40.715 "read": true, 00:23:40.715 "write": true, 00:23:40.715 "unmap": true, 00:23:40.715 "write_zeroes": true, 00:23:40.715 "flush": true, 00:23:40.715 "reset": true, 00:23:40.715 "compare": false, 00:23:40.715 "compare_and_write": false, 00:23:40.715 "abort": true, 00:23:40.715 "nvme_admin": false, 00:23:40.715 "nvme_io": false 00:23:40.715 }, 00:23:40.715 "memory_domains": [ 00:23:40.715 { 00:23:40.715 "dma_device_id": "system", 00:23:40.715 "dma_device_type": 1 00:23:40.715 }, 00:23:40.715 { 00:23:40.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.715 "dma_device_type": 2 00:23:40.715 } 00:23:40.715 ], 00:23:40.715 "driver_specific": { 00:23:40.715 "passthru": { 00:23:40.715 "name": "pt2", 00:23:40.715 "base_bdev_name": "malloc2" 00:23:40.715 } 00:23:40.715 } 00:23:40.715 }' 00:23:40.715 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:40.715 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:40.715 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:23:40.715 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:40.715 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:40.715 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:23:40.715 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:40.715 23:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:40.975 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:23:40.975 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:40.975 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:40.975 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:23:40.975 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:40.975 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:41.233 [2024-05-14 23:39:04.322156] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:41.233 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' da61c02d-da84-4b1e-a6b1-21f1a4b4164d '!=' da61c02d-da84-4b1e-a6b1-21f1a4b4164d ']' 00:23:41.233 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:41.233 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 00:23:41.233 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:23:41.233 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:41.493 [2024-05-14 23:39:04.577984] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.493 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.756 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:41.756 "name": "raid_bdev1", 00:23:41.756 "uuid": "da61c02d-da84-4b1e-a6b1-21f1a4b4164d", 00:23:41.756 "strip_size_kb": 0, 00:23:41.756 "state": "online", 00:23:41.756 "raid_level": "raid1", 00:23:41.756 "superblock": true, 00:23:41.756 "num_base_bdevs": 2, 00:23:41.756 "num_base_bdevs_discovered": 1, 00:23:41.756 "num_base_bdevs_operational": 1, 00:23:41.756 "base_bdevs_list": [ 00:23:41.756 { 00:23:41.756 "name": null, 00:23:41.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.756 "is_configured": false, 00:23:41.756 "data_offset": 256, 00:23:41.756 "data_size": 7936 00:23:41.756 }, 00:23:41.756 { 00:23:41.756 "name": "pt2", 00:23:41.756 "uuid": "ef817ef7-d6a8-5814-b9f8-f7c4a2d49196", 00:23:41.756 "is_configured": true, 00:23:41.756 "data_offset": 256, 00:23:41.756 "data_size": 7936 00:23:41.756 } 00:23:41.756 ] 00:23:41.756 }' 00:23:41.756 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:41.756 23:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:42.323 23:39:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:42.581 [2024-05-14 23:39:05.738049] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:42.582 [2024-05-14 23:39:05.738090] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:42.582 [2024-05-14 23:39:05.738156] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:42.582 [2024-05-14 23:39:05.738445] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:42.582 [2024-05-14 23:39:05.738462] 
bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:23:42.582 23:39:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.582 23:39:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:42.840 23:39:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:42.840 23:39:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:42.840 23:39:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:42.840 23:39:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:42.840 23:39:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:43.099 [2024-05-14 23:39:06.358105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:43.099 [2024-05-14 23:39:06.358375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.099 pt2 00:23:43.099 [2024-05-14 23:39:06.358451] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e280 00:23:43.099 [2024-05-14 23:39:06.358489] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.099 [2024-05-14 23:39:06.360868] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.099 [2024-05-14 23:39:06.360955] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:43.099 [2024-05-14 23:39:06.361048] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:43.099 [2024-05-14 23:39:06.361192] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:43.099 [2024-05-14 23:39:06.361313] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011c00 00:23:43.099 [2024-05-14 23:39:06.361336] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:43.099 [2024-05-14 23:39:06.361439] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:43.099 [2024-05-14 23:39:06.361534] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011c00 00:23:43.099 [2024-05-14 23:39:06.361559] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011c00 00:23:43.099 
[2024-05-14 23:39:06.361634] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.099 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.358 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:43.358 "name": "raid_bdev1", 00:23:43.358 "uuid": "da61c02d-da84-4b1e-a6b1-21f1a4b4164d", 00:23:43.358 "strip_size_kb": 0, 00:23:43.358 "state": "online", 00:23:43.358 "raid_level": "raid1", 00:23:43.358 "superblock": true, 00:23:43.358 "num_base_bdevs": 2, 00:23:43.358 "num_base_bdevs_discovered": 1, 00:23:43.358 "num_base_bdevs_operational": 1, 00:23:43.358 "base_bdevs_list": [ 00:23:43.358 { 00:23:43.358 "name": null, 00:23:43.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.358 "is_configured": false, 00:23:43.358 "data_offset": 256, 00:23:43.358 "data_size": 7936 00:23:43.358 }, 00:23:43.358 { 00:23:43.358 "name": "pt2", 00:23:43.358 "uuid": "ef817ef7-d6a8-5814-b9f8-f7c4a2d49196", 00:23:43.358 "is_configured": true, 00:23:43.358 "data_offset": 256, 00:23:43.358 "data_size": 7936 00:23:43.358 } 00:23:43.358 ] 00:23:43.358 }' 00:23:43.358 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:43.358 23:39:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # '[' 2 -gt 2 ']' 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # jq -r '.[] | .uuid' 00:23:44.292 [2024-05-14 23:39:07.462433] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # '[' 
da61c02d-da84-4b1e-a6b1-21f1a4b4164d '!=' da61c02d-da84-4b1e-a6b1-21f1a4b4164d ']' 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@568 -- # killprocess 75008 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 75008 ']' 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 75008 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75008 00:23:44.292 killing process with pid 75008 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75008' 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@965 -- # kill 75008 00:23:44.292 23:39:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # wait 75008 00:23:44.292 [2024-05-14 23:39:07.507541] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:44.292 [2024-05-14 23:39:07.507607] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:44.292 [2024-05-14 23:39:07.507643] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:44.292 [2024-05-14 23:39:07.507654] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state offline 00:23:44.550 [2024-05-14 23:39:07.672296] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:45.940 ************************************ 00:23:45.940 END TEST raid_superblock_test_md_interleaved 00:23:45.940 ************************************ 00:23:45.940 23:39:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # return 0 00:23:45.940 00:23:45.940 real 0m15.034s 00:23:45.940 user 0m27.550s 00:23:45.940 sys 0m1.541s 00:23:45.940 23:39:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:45.940 23:39:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.940 23:39:08 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:23:45.940 23:39:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:23:45.940 23:39:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:45.940 23:39:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:45.940 ************************************ 00:23:45.940 START TEST raid_rebuild_test_sb_md_interleaved 00:23:45.940 ************************************ 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false false 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_level=raid1 
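The killprocess teardown traced just above follows a simple pattern: confirm the PID still refers to the SPDK reactor started for the test, signal it, then reap it. A condensed stand-alone sketch of that pattern, assuming the PID was recorded when the target was launched (the real helper lives in common/autotest_common.sh and performs additional platform checks):

#!/usr/bin/env bash
# Simplified sketch of the killprocess pattern above; not the actual helper.
pid=75008                                  # recorded when the daemon started
if kill -0 "$pid" 2>/dev/null; then        # is the process still alive?
    comm=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($comm)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true        # reap if it is our child; ignore otherwise
fi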
00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local num_base_bdevs=2 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local superblock=true 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local background_io=false 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local verify=false 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i = 1 )) 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # echo BaseBdev1 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # echo BaseBdev2 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i++ )) 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # (( i <= num_base_bdevs )) 00:23:45.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local base_bdevs 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # local raid_bdev_name=raid_bdev1 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # local strip_size 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@582 -- # local create_arg 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@583 -- # local raid_bdev_size 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local data_offset 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # '[' raid1 '!=' raid1 ']' 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # strip_size=0 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # '[' true = true ']' 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # create_arg+=' -s' 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # raid_pid=75484 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # waitforlisten 75484 /var/tmp/spdk-raid.sock 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 75484 ']' 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:45.940 23:39:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.940 23:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:45.940 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:45.940 Zero copy mechanism will not be used. 00:23:45.940 [2024-05-14 23:39:09.060317] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:23:45.940 [2024-05-14 23:39:09.060520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75484 ] 00:23:45.940 [2024-05-14 23:39:09.218563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.199 [2024-05-14 23:39:09.430771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.457 [2024-05-14 23:39:09.625463] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:46.717 23:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:46.717 23:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:23:46.717 23:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:23:46.717 23:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:23:46.975 BaseBdev1_malloc 00:23:46.975 23:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:46.975 [2024-05-14 23:39:10.248574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:46.975 [2024-05-14 23:39:10.248688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.975 [2024-05-14 23:39:10.248745] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027380 00:23:46.975 [2024-05-14 23:39:10.248792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.975 BaseBdev1 00:23:46.975 [2024-05-14 23:39:10.250518] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.975 [2024-05-14 23:39:10.250556] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:47.233 23:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # for bdev in "${base_bdevs[@]}" 00:23:47.233 23:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:23:47.233 BaseBdev2_malloc 00:23:47.233 23:39:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:47.492 [2024-05-14 23:39:10.658396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:47.492 [2024-05-14 23:39:10.658482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:47.492 [2024-05-14 23:39:10.658534] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000029180 00:23:47.492 [2024-05-14 23:39:10.658578] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:47.492 [2024-05-14 23:39:10.660232] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:47.492 [2024-05-14 23:39:10.660281] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:47.492 BaseBdev2 00:23:47.492 23:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:23:47.751 spare_malloc 00:23:47.751 23:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:48.009 spare_delay 00:23:48.009 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@614 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:48.009 [2024-05-14 23:39:11.268454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:48.009 [2024-05-14 23:39:11.268546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.009 [2024-05-14 23:39:11.268595] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b580 00:23:48.009 [2024-05-14 23:39:11.268640] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.009 [2024-05-14 23:39:11.270173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.009 [2024-05-14 23:39:11.270217] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:48.009 spare 00:23:48.009 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:48.268 [2024-05-14 23:39:11.452561] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:48.268 [2024-05-14 23:39:11.454054] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:48.268 [2024-05-14 23:39:11.454242] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011180 00:23:48.268 [2024-05-14 23:39:11.454259] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:48.268 [2024-05-14 23:39:11.454340] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:48.268 [2024-05-14 23:39:11.454395] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011180 00:23:48.268 [2024-05-14 23:39:11.454406] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000011180 00:23:48.268 [2024-05-14 23:39:11.454464] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.268 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.526 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:48.526 "name": "raid_bdev1", 00:23:48.527 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:23:48.527 "strip_size_kb": 0, 00:23:48.527 "state": "online", 00:23:48.527 "raid_level": "raid1", 00:23:48.527 "superblock": true, 00:23:48.527 "num_base_bdevs": 2, 00:23:48.527 "num_base_bdevs_discovered": 2, 00:23:48.527 "num_base_bdevs_operational": 2, 00:23:48.527 "base_bdevs_list": [ 00:23:48.527 { 00:23:48.527 "name": "BaseBdev1", 00:23:48.527 "uuid": "1c3f4784-78ed-5532-9c2b-9996922ed0ee", 00:23:48.527 "is_configured": true, 00:23:48.527 "data_offset": 256, 00:23:48.527 "data_size": 7936 00:23:48.527 }, 00:23:48.527 { 00:23:48.527 "name": "BaseBdev2", 00:23:48.527 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:23:48.527 "is_configured": true, 00:23:48.527 "data_offset": 256, 00:23:48.527 "data_size": 7936 00:23:48.527 } 00:23:48.527 ] 00:23:48.527 }' 00:23:48.527 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:48.527 23:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.462 23:39:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:49.462 23:39:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # jq -r '.[].num_blocks' 00:23:49.462 [2024-05-14 23:39:12.604855] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:49.462 23:39:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # raid_bdev_size=7936 00:23:49.462 23:39:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.462 23:39:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:49.721 23:39:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # data_offset=256 00:23:49.721 23:39:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@626 -- # '[' false = true ']' 00:23:49.721 23:39:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@629 -- # '[' false = true ']' 00:23:49.721 23:39:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:49.980 [2024-05-14 23:39:13.080744] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@648 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.980 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.239 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:50.239 "name": "raid_bdev1", 00:23:50.239 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:23:50.239 "strip_size_kb": 0, 00:23:50.239 "state": "online", 00:23:50.239 "raid_level": "raid1", 00:23:50.239 "superblock": true, 00:23:50.239 "num_base_bdevs": 2, 00:23:50.239 "num_base_bdevs_discovered": 1, 00:23:50.239 "num_base_bdevs_operational": 1, 00:23:50.239 "base_bdevs_list": [ 00:23:50.239 { 00:23:50.239 "name": null, 00:23:50.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.239 "is_configured": false, 00:23:50.239 "data_offset": 256, 00:23:50.239 "data_size": 7936 00:23:50.239 }, 00:23:50.239 { 00:23:50.239 "name": "BaseBdev2", 00:23:50.239 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:23:50.239 "is_configured": true, 00:23:50.239 "data_offset": 256, 00:23:50.239 "data_size": 7936 00:23:50.239 } 00:23:50.239 ] 00:23:50.239 }' 
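The hot-remove step above is driven entirely over RPC: a single bdev_raid_remove_base_bdev call detaches BaseBdev1, and the degraded-but-online check is just a jq filter over bdev_raid_get_bdevs. A condensed sketch of that verification, using the same socket and bdev names as the trace (the echo messages are illustrative and not taken from the test):

#!/usr/bin/env bash
# Sketch: drop one base bdev from raid_bdev1 and confirm the array stays
# online in degraded mode, mirroring bdev_raid.sh@645/@648 above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

"$RPC" -s "$SOCK" bdev_raid_remove_base_bdev BaseBdev1

info=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all |
       jq -r '.[] | select(.name == "raid_bdev1")')

state=$(jq -r '.state' <<<"$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")

if [ "$state" = online ] && [ "$discovered" -eq 1 ]; then
    echo "raid_bdev1 is online and degraded (1 of 2 base bdevs)"
else
    echo "unexpected state: state=$state discovered=$discovered"
fi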
00:23:50.239 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:50.239 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.805 23:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:51.064 [2024-05-14 23:39:14.164862] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:51.064 [2024-05-14 23:39:14.179971] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:51.064 [2024-05-14 23:39:14.181516] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:51.064 23:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # sleep 1 00:23:52.000 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:52.000 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:52.000 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:52.000 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:52.000 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:52.000 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.000 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.259 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:52.259 "name": "raid_bdev1", 00:23:52.259 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:23:52.259 "strip_size_kb": 0, 00:23:52.259 "state": "online", 00:23:52.259 "raid_level": "raid1", 00:23:52.259 "superblock": true, 00:23:52.259 "num_base_bdevs": 2, 00:23:52.259 "num_base_bdevs_discovered": 2, 00:23:52.259 "num_base_bdevs_operational": 2, 00:23:52.259 "process": { 00:23:52.259 "type": "rebuild", 00:23:52.259 "target": "spare", 00:23:52.259 "progress": { 00:23:52.259 "blocks": 2816, 00:23:52.259 "percent": 35 00:23:52.259 } 00:23:52.259 }, 00:23:52.259 "base_bdevs_list": [ 00:23:52.259 { 00:23:52.259 "name": "spare", 00:23:52.259 "uuid": "6dec1610-4c5f-5f1b-a2c9-439eca15efd0", 00:23:52.259 "is_configured": true, 00:23:52.259 "data_offset": 256, 00:23:52.259 "data_size": 7936 00:23:52.259 }, 00:23:52.259 { 00:23:52.259 "name": "BaseBdev2", 00:23:52.259 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:23:52.259 "is_configured": true, 00:23:52.259 "data_offset": 256, 00:23:52.259 "data_size": 7936 00:23:52.259 } 00:23:52.259 ] 00:23:52.259 }' 00:23:52.259 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:52.259 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:52.259 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:52.259 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:52.259 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:52.517 [2024-05-14 23:39:15.747079] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:52.517 [2024-05-14 23:39:15.791080] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:52.517 [2024-05-14 23:39:15.791334] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.776 23:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.034 23:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:53.034 "name": "raid_bdev1", 00:23:53.034 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:23:53.034 "strip_size_kb": 0, 00:23:53.034 "state": "online", 00:23:53.034 "raid_level": "raid1", 00:23:53.034 "superblock": true, 00:23:53.034 "num_base_bdevs": 2, 00:23:53.034 "num_base_bdevs_discovered": 1, 00:23:53.034 "num_base_bdevs_operational": 1, 00:23:53.034 "base_bdevs_list": [ 00:23:53.034 { 00:23:53.034 "name": null, 00:23:53.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.034 "is_configured": false, 00:23:53.034 "data_offset": 256, 00:23:53.034 "data_size": 7936 00:23:53.034 }, 00:23:53.034 { 00:23:53.034 "name": "BaseBdev2", 00:23:53.034 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:23:53.034 "is_configured": true, 00:23:53.034 "data_offset": 256, 00:23:53.034 "data_size": 7936 00:23:53.034 } 00:23:53.034 ] 00:23:53.034 }' 00:23:53.034 23:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:53.034 23:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.602 23:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
none none 00:23:53.602 23:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:53.602 23:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:53.602 23:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:53.602 23:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:53.602 23:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.602 23:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.861 23:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:53.861 "name": "raid_bdev1", 00:23:53.861 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:23:53.861 "strip_size_kb": 0, 00:23:53.861 "state": "online", 00:23:53.861 "raid_level": "raid1", 00:23:53.861 "superblock": true, 00:23:53.861 "num_base_bdevs": 2, 00:23:53.861 "num_base_bdevs_discovered": 1, 00:23:53.861 "num_base_bdevs_operational": 1, 00:23:53.861 "base_bdevs_list": [ 00:23:53.861 { 00:23:53.861 "name": null, 00:23:53.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.861 "is_configured": false, 00:23:53.861 "data_offset": 256, 00:23:53.861 "data_size": 7936 00:23:53.861 }, 00:23:53.861 { 00:23:53.861 "name": "BaseBdev2", 00:23:53.861 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:23:53.861 "is_configured": true, 00:23:53.861 "data_offset": 256, 00:23:53.861 "data_size": 7936 00:23:53.861 } 00:23:53.861 ] 00:23:53.861 }' 00:23:53.861 23:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:53.861 23:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:53.861 23:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:53.861 23:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:53.861 23:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@667 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:54.119 [2024-05-14 23:39:17.332145] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:54.119 [2024-05-14 23:39:17.346607] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:54.119 [2024-05-14 23:39:17.348096] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:54.119 23:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # sleep 1 00:23:55.556 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@669 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.556 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:55.556 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:55.556 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:55.556 
23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:55.556 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.556 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.556 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:55.556 "name": "raid_bdev1", 00:23:55.557 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:23:55.557 "strip_size_kb": 0, 00:23:55.557 "state": "online", 00:23:55.557 "raid_level": "raid1", 00:23:55.557 "superblock": true, 00:23:55.557 "num_base_bdevs": 2, 00:23:55.557 "num_base_bdevs_discovered": 2, 00:23:55.557 "num_base_bdevs_operational": 2, 00:23:55.557 "process": { 00:23:55.557 "type": "rebuild", 00:23:55.557 "target": "spare", 00:23:55.557 "progress": { 00:23:55.557 "blocks": 2816, 00:23:55.557 "percent": 35 00:23:55.557 } 00:23:55.557 }, 00:23:55.557 "base_bdevs_list": [ 00:23:55.557 { 00:23:55.557 "name": "spare", 00:23:55.557 "uuid": "6dec1610-4c5f-5f1b-a2c9-439eca15efd0", 00:23:55.557 "is_configured": true, 00:23:55.557 "data_offset": 256, 00:23:55.557 "data_size": 7936 00:23:55.557 }, 00:23:55.557 { 00:23:55.557 "name": "BaseBdev2", 00:23:55.557 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:23:55.557 "is_configured": true, 00:23:55.557 "data_offset": 256, 00:23:55.557 "data_size": 7936 00:23:55.557 } 00:23:55.557 ] 00:23:55.557 }' 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # '[' true = true ']' 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # '[' = false ']' 00:23:55.557 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 671: [: =: unary operator expected 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@696 -- # local num_base_bdevs_operational=2 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@698 -- # '[' raid1 = raid1 ']' 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@698 -- # '[' 2 -gt 2 ']' 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # local timeout=749 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:55.557 23:39:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.557 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.830 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:55.830 "name": "raid_bdev1", 00:23:55.830 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:23:55.830 "strip_size_kb": 0, 00:23:55.830 "state": "online", 00:23:55.830 "raid_level": "raid1", 00:23:55.830 "superblock": true, 00:23:55.830 "num_base_bdevs": 2, 00:23:55.830 "num_base_bdevs_discovered": 2, 00:23:55.830 "num_base_bdevs_operational": 2, 00:23:55.830 "process": { 00:23:55.830 "type": "rebuild", 00:23:55.830 "target": "spare", 00:23:55.830 "progress": { 00:23:55.830 "blocks": 3840, 00:23:55.830 "percent": 48 00:23:55.830 } 00:23:55.830 }, 00:23:55.830 "base_bdevs_list": [ 00:23:55.830 { 00:23:55.830 "name": "spare", 00:23:55.830 "uuid": "6dec1610-4c5f-5f1b-a2c9-439eca15efd0", 00:23:55.830 "is_configured": true, 00:23:55.830 "data_offset": 256, 00:23:55.830 "data_size": 7936 00:23:55.830 }, 00:23:55.830 { 00:23:55.830 "name": "BaseBdev2", 00:23:55.830 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:23:55.830 "is_configured": true, 00:23:55.830 "data_offset": 256, 00:23:55.830 "data_size": 7936 00:23:55.830 } 00:23:55.830 ] 00:23:55.830 }' 00:23:55.830 23:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:55.830 23:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.830 23:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:55.830 23:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.830 23:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # sleep 1 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # 
raid_bdev_info='{ 00:23:57.208 "name": "raid_bdev1", 00:23:57.208 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:23:57.208 "strip_size_kb": 0, 00:23:57.208 "state": "online", 00:23:57.208 "raid_level": "raid1", 00:23:57.208 "superblock": true, 00:23:57.208 "num_base_bdevs": 2, 00:23:57.208 "num_base_bdevs_discovered": 2, 00:23:57.208 "num_base_bdevs_operational": 2, 00:23:57.208 "process": { 00:23:57.208 "type": "rebuild", 00:23:57.208 "target": "spare", 00:23:57.208 "progress": { 00:23:57.208 "blocks": 7424, 00:23:57.208 "percent": 93 00:23:57.208 } 00:23:57.208 }, 00:23:57.208 "base_bdevs_list": [ 00:23:57.208 { 00:23:57.208 "name": "spare", 00:23:57.208 "uuid": "6dec1610-4c5f-5f1b-a2c9-439eca15efd0", 00:23:57.208 "is_configured": true, 00:23:57.208 "data_offset": 256, 00:23:57.208 "data_size": 7936 00:23:57.208 }, 00:23:57.208 { 00:23:57.208 "name": "BaseBdev2", 00:23:57.208 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:23:57.208 "is_configured": true, 00:23:57.208 "data_offset": 256, 00:23:57.208 "data_size": 7936 00:23:57.208 } 00:23:57.208 ] 00:23:57.208 }' 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:57.208 23:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # sleep 1 00:23:57.208 [2024-05-14 23:39:20.466625] bdev_raid.c:2741:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:57.208 [2024-05-14 23:39:20.466691] bdev_raid.c:2458:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:57.208 [2024-05-14 23:39:20.466813] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@712 -- # (( SECONDS < timeout )) 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@713 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:58.583 "name": "raid_bdev1", 00:23:58.583 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:23:58.583 "strip_size_kb": 0, 00:23:58.583 "state": "online", 00:23:58.583 "raid_level": "raid1", 00:23:58.583 "superblock": true, 00:23:58.583 
"num_base_bdevs": 2, 00:23:58.583 "num_base_bdevs_discovered": 2, 00:23:58.583 "num_base_bdevs_operational": 2, 00:23:58.583 "base_bdevs_list": [ 00:23:58.583 { 00:23:58.583 "name": "spare", 00:23:58.583 "uuid": "6dec1610-4c5f-5f1b-a2c9-439eca15efd0", 00:23:58.583 "is_configured": true, 00:23:58.583 "data_offset": 256, 00:23:58.583 "data_size": 7936 00:23:58.583 }, 00:23:58.583 { 00:23:58.583 "name": "BaseBdev2", 00:23:58.583 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:23:58.583 "is_configured": true, 00:23:58.583 "data_offset": 256, 00:23:58.583 "data_size": 7936 00:23:58.583 } 00:23:58.583 ] 00:23:58.583 }' 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # break 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.583 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.842 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:58.842 "name": "raid_bdev1", 00:23:58.842 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:23:58.842 "strip_size_kb": 0, 00:23:58.842 "state": "online", 00:23:58.842 "raid_level": "raid1", 00:23:58.842 "superblock": true, 00:23:58.842 "num_base_bdevs": 2, 00:23:58.842 "num_base_bdevs_discovered": 2, 00:23:58.842 "num_base_bdevs_operational": 2, 00:23:58.842 "base_bdevs_list": [ 00:23:58.842 { 00:23:58.842 "name": "spare", 00:23:58.842 "uuid": "6dec1610-4c5f-5f1b-a2c9-439eca15efd0", 00:23:58.842 "is_configured": true, 00:23:58.842 "data_offset": 256, 00:23:58.842 "data_size": 7936 00:23:58.842 }, 00:23:58.842 { 00:23:58.842 "name": "BaseBdev2", 00:23:58.842 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:23:58.842 "is_configured": true, 00:23:58.842 "data_offset": 256, 00:23:58.842 "data_size": 7936 00:23:58.842 } 00:23:58.842 ] 00:23:58.842 }' 00:23:58.842 23:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.842 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.101 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:59.101 "name": "raid_bdev1", 00:23:59.101 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:23:59.101 "strip_size_kb": 0, 00:23:59.101 "state": "online", 00:23:59.101 "raid_level": "raid1", 00:23:59.101 "superblock": true, 00:23:59.101 "num_base_bdevs": 2, 00:23:59.101 "num_base_bdevs_discovered": 2, 00:23:59.101 "num_base_bdevs_operational": 2, 00:23:59.101 "base_bdevs_list": [ 00:23:59.101 { 00:23:59.101 "name": "spare", 00:23:59.101 "uuid": "6dec1610-4c5f-5f1b-a2c9-439eca15efd0", 00:23:59.101 "is_configured": true, 00:23:59.101 "data_offset": 256, 00:23:59.101 "data_size": 7936 00:23:59.101 }, 00:23:59.101 { 00:23:59.101 "name": "BaseBdev2", 00:23:59.101 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:23:59.101 "is_configured": true, 00:23:59.101 "data_offset": 256, 00:23:59.101 "data_size": 7936 00:23:59.101 } 00:23:59.101 ] 00:23:59.101 }' 00:23:59.101 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:59.101 23:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:00.037 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:00.037 [2024-05-14 23:39:23.267933] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:00.037 [2024-05-14 23:39:23.267970] bdev_raid.c:1845:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:00.037 [2024-05-14 23:39:23.268058] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:00.037 [2024-05-14 23:39:23.268106] 
bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:00.037 [2024-05-14 23:39:23.268119] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state offline 00:24:00.037 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.037 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # jq length 00:24:00.296 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@725 -- # [[ 0 == 0 ]] 00:24:00.297 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@727 -- # '[' false = true ']' 00:24:00.297 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # '[' true = true ']' 00:24:00.297 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:24:00.297 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev1 ']' 00:24:00.297 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:00.555 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:00.813 [2024-05-14 23:39:23.960034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:00.813 [2024-05-14 23:39:23.960145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.813 [2024-05-14 23:39:23.960369] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e280 00:24:00.813 [2024-05-14 23:39:23.960411] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.813 BaseBdev1 00:24:00.813 [2024-05-14 23:39:23.961923] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.813 [2024-05-14 23:39:23.961990] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:00.813 [2024-05-14 23:39:23.962048] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:00.813 [2024-05-14 23:39:23.962129] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:00.813 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # for bdev in "${base_bdevs[@]}" 00:24:00.813 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # '[' -z BaseBdev2 ']' 00:24:00.813 23:39:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:01.071 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:01.330 [2024-05-14 23:39:24.404102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:01.330 [2024-05-14 23:39:24.404359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.330 [2024-05-14 
23:39:24.404434] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002fa80 00:24:01.330 [2024-05-14 23:39:24.404466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.330 [2024-05-14 23:39:24.404630] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.330 [2024-05-14 23:39:24.404679] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:01.330 [2024-05-14 23:39:24.404742] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:01.330 [2024-05-14 23:39:24.404757] bdev_raid.c:3396:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:01.330 [2024-05-14 23:39:24.404766] bdev_raid.c:2310:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:01.330 [2024-05-14 23:39:24.404789] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state configuring 00:24:01.330 [2024-05-14 23:39:24.404867] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:01.330 BaseBdev2 00:24:01.330 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:01.588 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:01.848 [2024-05-14 23:39:24.908158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:01.848 [2024-05-14 23:39:24.908258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.848 [2024-05-14 23:39:24.908310] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031280 00:24:01.848 [2024-05-14 23:39:24.908334] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.848 [2024-05-14 23:39:24.908492] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.848 [2024-05-14 23:39:24.908538] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:01.848 [2024-05-14 23:39:24.908600] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:01.848 [2024-05-14 23:39:24.908627] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:01.848 spare 00:24:01.848 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:01.848 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:01.848 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:01.848 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:01.848 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:01.848 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:01.848 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:01.848 23:39:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:01.848 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:01.848 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:01.848 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.848 23:39:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.848 [2024-05-14 23:39:25.008710] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:24:01.848 [2024-05-14 23:39:25.008745] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:01.848 [2024-05-14 23:39:25.008868] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:01.848 [2024-05-14 23:39:25.008956] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:24:01.848 [2024-05-14 23:39:25.008969] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:24:01.848 [2024-05-14 23:39:25.009023] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.107 23:39:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:02.107 "name": "raid_bdev1", 00:24:02.107 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:02.107 "strip_size_kb": 0, 00:24:02.107 "state": "online", 00:24:02.107 "raid_level": "raid1", 00:24:02.107 "superblock": true, 00:24:02.107 "num_base_bdevs": 2, 00:24:02.107 "num_base_bdevs_discovered": 2, 00:24:02.107 "num_base_bdevs_operational": 2, 00:24:02.107 "base_bdevs_list": [ 00:24:02.107 { 00:24:02.107 "name": "spare", 00:24:02.107 "uuid": "6dec1610-4c5f-5f1b-a2c9-439eca15efd0", 00:24:02.107 "is_configured": true, 00:24:02.107 "data_offset": 256, 00:24:02.107 "data_size": 7936 00:24:02.107 }, 00:24:02.107 { 00:24:02.107 "name": "BaseBdev2", 00:24:02.107 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:02.107 "is_configured": true, 00:24:02.107 "data_offset": 256, 00:24:02.107 "data_size": 7936 00:24:02.107 } 00:24:02.107 ] 00:24:02.107 }' 00:24:02.107 23:39:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:02.107 23:39:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.674 23:39:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:02.674 23:39:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:02.674 23:39:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:02.674 23:39:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:02.674 23:39:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:02.674 23:39:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.674 23:39:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.933 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:02.933 "name": "raid_bdev1", 00:24:02.933 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:02.933 "strip_size_kb": 0, 00:24:02.933 "state": "online", 00:24:02.933 "raid_level": "raid1", 00:24:02.933 "superblock": true, 00:24:02.933 "num_base_bdevs": 2, 00:24:02.933 "num_base_bdevs_discovered": 2, 00:24:02.933 "num_base_bdevs_operational": 2, 00:24:02.933 "base_bdevs_list": [ 00:24:02.933 { 00:24:02.933 "name": "spare", 00:24:02.933 "uuid": "6dec1610-4c5f-5f1b-a2c9-439eca15efd0", 00:24:02.933 "is_configured": true, 00:24:02.933 "data_offset": 256, 00:24:02.933 "data_size": 7936 00:24:02.933 }, 00:24:02.933 { 00:24:02.933 "name": "BaseBdev2", 00:24:02.933 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:02.933 "is_configured": true, 00:24:02.933 "data_offset": 256, 00:24:02.933 "data_size": 7936 00:24:02.933 } 00:24:02.933 ] 00:24:02.933 }' 00:24:02.933 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:02.933 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:02.933 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:02.933 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:02.933 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.933 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:03.192 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.192 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:03.451 [2024-05-14 23:39:26.601784] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- 
# local tmp 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.451 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.710 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:03.710 "name": "raid_bdev1", 00:24:03.710 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:03.710 "strip_size_kb": 0, 00:24:03.710 "state": "online", 00:24:03.710 "raid_level": "raid1", 00:24:03.710 "superblock": true, 00:24:03.710 "num_base_bdevs": 2, 00:24:03.710 "num_base_bdevs_discovered": 1, 00:24:03.710 "num_base_bdevs_operational": 1, 00:24:03.710 "base_bdevs_list": [ 00:24:03.710 { 00:24:03.710 "name": null, 00:24:03.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.710 "is_configured": false, 00:24:03.710 "data_offset": 256, 00:24:03.710 "data_size": 7936 00:24:03.710 }, 00:24:03.710 { 00:24:03.710 "name": "BaseBdev2", 00:24:03.710 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:03.710 "is_configured": true, 00:24:03.710 "data_offset": 256, 00:24:03.710 "data_size": 7936 00:24:03.710 } 00:24:03.710 ] 00:24:03.710 }' 00:24:03.710 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:03.710 23:39:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.278 23:39:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:04.537 [2024-05-14 23:39:27.744796] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.537 [2024-05-14 23:39:27.744957] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:04.537 [2024-05-14 23:39:27.744974] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
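An aside on the repeated verify_raid_bdev_state / verify_raid_bdev_process calls in this trace: they reduce to querying the RPC server behind /var/tmp/spdk-raid.sock and filtering the JSON with jq, exactly as the xtrace lines show. A condensed sketch of that check, reusing only the commands and jq filters visible in the log (the real helpers in bdev/bdev_raid.sh carry extra bookkeeping not reproduced here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Pull the descriptor for raid_bdev1 out of the full bdev_raid_get_bdevs listing.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # State and level checks, as verify_raid_bdev_state performs.
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
    # Rebuild-process checks, as verify_raid_bdev_process performs; both default to "none".
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]]
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]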
00:24:04.537 [2024-05-14 23:39:27.745045] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.537 [2024-05-14 23:39:27.759232] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:04.537 [2024-05-14 23:39:27.760783] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:04.537 23:39:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # sleep 1 00:24:05.911 23:39:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.911 23:39:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.911 23:39:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.911 23:39:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.911 23:39:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.911 23:39:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.911 23:39:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.911 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.911 "name": "raid_bdev1", 00:24:05.911 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:05.911 "strip_size_kb": 0, 00:24:05.911 "state": "online", 00:24:05.911 "raid_level": "raid1", 00:24:05.911 "superblock": true, 00:24:05.911 "num_base_bdevs": 2, 00:24:05.911 "num_base_bdevs_discovered": 2, 00:24:05.911 "num_base_bdevs_operational": 2, 00:24:05.911 "process": { 00:24:05.911 "type": "rebuild", 00:24:05.911 "target": "spare", 00:24:05.911 "progress": { 00:24:05.911 "blocks": 3072, 00:24:05.911 "percent": 38 00:24:05.911 } 00:24:05.911 }, 00:24:05.911 "base_bdevs_list": [ 00:24:05.911 { 00:24:05.911 "name": "spare", 00:24:05.911 "uuid": "6dec1610-4c5f-5f1b-a2c9-439eca15efd0", 00:24:05.911 "is_configured": true, 00:24:05.911 "data_offset": 256, 00:24:05.911 "data_size": 7936 00:24:05.911 }, 00:24:05.911 { 00:24:05.911 "name": "BaseBdev2", 00:24:05.911 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:05.911 "is_configured": true, 00:24:05.911 "data_offset": 256, 00:24:05.911 "data_size": 7936 00:24:05.911 } 00:24:05.911 ] 00:24:05.911 }' 00:24:05.911 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.911 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.911 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:05.911 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.911 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:06.169 [2024-05-14 23:39:29.374909] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:06.427 [2024-05-14 23:39:29.470669] 
bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:06.427 [2024-05-14 23:39:29.470757] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.427 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.686 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:06.686 "name": "raid_bdev1", 00:24:06.686 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:06.686 "strip_size_kb": 0, 00:24:06.686 "state": "online", 00:24:06.686 "raid_level": "raid1", 00:24:06.686 "superblock": true, 00:24:06.686 "num_base_bdevs": 2, 00:24:06.686 "num_base_bdevs_discovered": 1, 00:24:06.686 "num_base_bdevs_operational": 1, 00:24:06.686 "base_bdevs_list": [ 00:24:06.686 { 00:24:06.686 "name": null, 00:24:06.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.686 "is_configured": false, 00:24:06.686 "data_offset": 256, 00:24:06.686 "data_size": 7936 00:24:06.686 }, 00:24:06.686 { 00:24:06.686 "name": "BaseBdev2", 00:24:06.686 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:06.686 "is_configured": true, 00:24:06.686 "data_offset": 256, 00:24:06.686 "data_size": 7936 00:24:06.686 } 00:24:06.686 ] 00:24:06.686 }' 00:24:06.686 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:06.686 23:39:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:07.251 23:39:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:07.509 [2024-05-14 23:39:30.679891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:07.509 [2024-05-14 23:39:30.680015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.509 [2024-05-14 23:39:30.680088] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033380 00:24:07.509 [2024-05-14 23:39:30.680112] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.509 [2024-05-14 23:39:30.680554] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.509 [2024-05-14 23:39:30.680595] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:07.509 [2024-05-14 23:39:30.680659] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:07.509 [2024-05-14 23:39:30.680675] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:07.509 [2024-05-14 23:39:30.680685] bdev_raid.c:3452:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:07.509 [2024-05-14 23:39:30.680717] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.509 [2024-05-14 23:39:30.694910] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:24:07.509 spare 00:24:07.509 [2024-05-14 23:39:30.696509] bdev_raid.c:2776:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:07.509 23:39:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:08.442 23:39:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.442 23:39:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.442 23:39:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:08.442 23:39:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:08.442 23:39:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.442 23:39:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.442 23:39:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.701 23:39:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.701 "name": "raid_bdev1", 00:24:08.701 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:08.701 "strip_size_kb": 0, 00:24:08.701 "state": "online", 00:24:08.701 "raid_level": "raid1", 00:24:08.701 "superblock": true, 00:24:08.701 "num_base_bdevs": 2, 00:24:08.701 "num_base_bdevs_discovered": 2, 00:24:08.701 "num_base_bdevs_operational": 2, 00:24:08.701 "process": { 00:24:08.701 "type": "rebuild", 00:24:08.701 "target": "spare", 00:24:08.701 "progress": { 00:24:08.701 "blocks": 2816, 00:24:08.701 "percent": 35 00:24:08.701 } 00:24:08.701 }, 00:24:08.701 "base_bdevs_list": [ 00:24:08.701 { 00:24:08.701 "name": "spare", 00:24:08.701 "uuid": "6dec1610-4c5f-5f1b-a2c9-439eca15efd0", 00:24:08.701 "is_configured": true, 00:24:08.701 "data_offset": 256, 00:24:08.701 "data_size": 7936 00:24:08.701 }, 00:24:08.701 { 00:24:08.701 "name": "BaseBdev2", 00:24:08.701 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:08.701 "is_configured": true, 00:24:08.701 "data_offset": 256, 00:24:08.701 "data_size": 7936 00:24:08.701 } 00:24:08.701 ] 
00:24:08.701 }' 00:24:08.701 23:39:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.701 23:39:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.701 23:39:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.959 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.959 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:08.959 [2024-05-14 23:39:32.194271] bdev_raid.c:2111:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:08.959 [2024-05-14 23:39:32.211901] bdev_raid.c:2467:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:08.959 [2024-05-14 23:39:32.211999] bdev_raid.c: 315:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:09.218 "name": "raid_bdev1", 00:24:09.218 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:09.218 "strip_size_kb": 0, 00:24:09.218 "state": "online", 00:24:09.218 "raid_level": "raid1", 00:24:09.218 "superblock": true, 00:24:09.218 "num_base_bdevs": 2, 00:24:09.218 "num_base_bdevs_discovered": 1, 00:24:09.218 "num_base_bdevs_operational": 1, 00:24:09.218 "base_bdevs_list": [ 00:24:09.218 { 00:24:09.218 "name": null, 00:24:09.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.218 "is_configured": false, 00:24:09.218 "data_offset": 256, 00:24:09.218 "data_size": 7936 00:24:09.218 }, 00:24:09.218 { 00:24:09.218 "name": "BaseBdev2", 00:24:09.218 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:09.218 "is_configured": 
true, 00:24:09.218 "data_offset": 256, 00:24:09.218 "data_size": 7936 00:24:09.218 } 00:24:09.218 ] 00:24:09.218 }' 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:09.218 23:39:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.166 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:10.166 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:10.166 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:10.166 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:10.166 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:10.166 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.166 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.166 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:10.166 "name": "raid_bdev1", 00:24:10.166 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:10.166 "strip_size_kb": 0, 00:24:10.166 "state": "online", 00:24:10.166 "raid_level": "raid1", 00:24:10.166 "superblock": true, 00:24:10.166 "num_base_bdevs": 2, 00:24:10.166 "num_base_bdevs_discovered": 1, 00:24:10.166 "num_base_bdevs_operational": 1, 00:24:10.166 "base_bdevs_list": [ 00:24:10.166 { 00:24:10.166 "name": null, 00:24:10.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.166 "is_configured": false, 00:24:10.166 "data_offset": 256, 00:24:10.166 "data_size": 7936 00:24:10.166 }, 00:24:10.166 { 00:24:10.166 "name": "BaseBdev2", 00:24:10.166 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:10.166 "is_configured": true, 00:24:10.166 "data_offset": 256, 00:24:10.166 "data_size": 7936 00:24:10.166 } 00:24:10.166 ] 00:24:10.166 }' 00:24:10.166 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:10.424 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:10.424 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:10.424 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:10.424 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:10.683 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@785 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:10.683 [2024-05-14 23:39:33.961840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:10.683 [2024-05-14 23:39:33.961962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:10.683 [2024-05-14 23:39:33.962010] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000035480 00:24:10.683 [2024-05-14 23:39:33.962041] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:10.683 [2024-05-14 23:39:33.962496] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:10.683 [2024-05-14 23:39:33.962562] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:10.683 [2024-05-14 23:39:33.962620] bdev_raid.c:3528:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:10.683 [2024-05-14 23:39:33.962635] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:10.683 [2024-05-14 23:39:33.962643] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:10.941 BaseBdev1 00:24:10.941 23:39:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # sleep 1 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.878 23:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.136 23:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:12.136 "name": "raid_bdev1", 00:24:12.136 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:12.136 "strip_size_kb": 0, 00:24:12.136 "state": "online", 00:24:12.136 "raid_level": "raid1", 00:24:12.136 "superblock": true, 00:24:12.136 "num_base_bdevs": 2, 00:24:12.136 "num_base_bdevs_discovered": 1, 00:24:12.136 "num_base_bdevs_operational": 1, 00:24:12.136 "base_bdevs_list": [ 00:24:12.136 { 00:24:12.136 "name": null, 00:24:12.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.136 "is_configured": false, 00:24:12.136 "data_offset": 256, 00:24:12.136 "data_size": 7936 00:24:12.136 }, 00:24:12.136 { 00:24:12.136 "name": "BaseBdev2", 00:24:12.136 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:12.136 "is_configured": true, 00:24:12.136 "data_offset": 256, 00:24:12.136 "data_size": 7936 00:24:12.136 } 00:24:12.136 ] 
00:24:12.136 }' 00:24:12.136 23:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:12.136 23:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:12.702 23:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:12.702 23:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:12.702 23:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:12.702 23:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:12.702 23:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:12.702 23:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.702 23:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.960 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:12.960 "name": "raid_bdev1", 00:24:12.960 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:12.960 "strip_size_kb": 0, 00:24:12.960 "state": "online", 00:24:12.960 "raid_level": "raid1", 00:24:12.960 "superblock": true, 00:24:12.960 "num_base_bdevs": 2, 00:24:12.960 "num_base_bdevs_discovered": 1, 00:24:12.960 "num_base_bdevs_operational": 1, 00:24:12.960 "base_bdevs_list": [ 00:24:12.960 { 00:24:12.960 "name": null, 00:24:12.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.960 "is_configured": false, 00:24:12.960 "data_offset": 256, 00:24:12.960 "data_size": 7936 00:24:12.960 }, 00:24:12.960 { 00:24:12.960 "name": "BaseBdev2", 00:24:12.960 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:12.960 "is_configured": true, 00:24:12.960 "data_offset": 256, 00:24:12.960 "data_size": 7936 00:24:12.960 } 00:24:12.960 ] 00:24:12.960 }' 00:24:12.960 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:12.960 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:12.960 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:13.219 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:13.219 [2024-05-14 23:39:36.493768] bdev_raid.c:3122:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.219 [2024-05-14 23:39:36.493902] bdev_raid.c:3411:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:13.219 [2024-05-14 23:39:36.493916] bdev_raid.c:3430:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:13.219 request: 00:24:13.219 { 00:24:13.219 "raid_bdev": "raid_bdev1", 00:24:13.219 "base_bdev": "BaseBdev1", 00:24:13.219 "method": "bdev_raid_add_base_bdev", 00:24:13.219 "req_id": 1 00:24:13.219 } 00:24:13.219 Got JSON-RPC error response 00:24:13.219 response: 00:24:13.219 { 00:24:13.219 "code": -22, 00:24:13.219 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:13.219 } 00:24:13.498 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:24:13.498 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:13.498 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:13.498 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:13.498 23:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # sleep 1 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.434 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.692 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:14.692 "name": "raid_bdev1", 00:24:14.692 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:14.692 "strip_size_kb": 0, 00:24:14.692 "state": "online", 00:24:14.692 "raid_level": "raid1", 00:24:14.692 "superblock": true, 00:24:14.692 "num_base_bdevs": 2, 00:24:14.692 "num_base_bdevs_discovered": 1, 00:24:14.692 "num_base_bdevs_operational": 1, 00:24:14.692 "base_bdevs_list": [ 00:24:14.692 { 00:24:14.692 "name": null, 00:24:14.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.692 "is_configured": false, 00:24:14.692 "data_offset": 256, 00:24:14.692 "data_size": 7936 00:24:14.692 }, 00:24:14.692 { 00:24:14.692 "name": "BaseBdev2", 00:24:14.692 "uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:14.692 "is_configured": true, 00:24:14.692 "data_offset": 256, 00:24:14.692 "data_size": 7936 00:24:14.692 } 00:24:14.692 ] 00:24:14.692 }' 00:24:14.692 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:14.692 23:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:15.259 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:15.259 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:15.259 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:15.259 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:15.259 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:15.259 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.259 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:15.520 "name": "raid_bdev1", 00:24:15.520 "uuid": "a2a249cf-0684-481b-98f3-35e14e12ddbd", 00:24:15.520 "strip_size_kb": 0, 00:24:15.520 "state": "online", 00:24:15.520 "raid_level": "raid1", 00:24:15.520 "superblock": true, 00:24:15.520 "num_base_bdevs": 2, 00:24:15.520 "num_base_bdevs_discovered": 1, 00:24:15.520 "num_base_bdevs_operational": 1, 00:24:15.520 "base_bdevs_list": [ 00:24:15.520 { 00:24:15.520 "name": null, 00:24:15.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.520 "is_configured": false, 00:24:15.520 "data_offset": 256, 00:24:15.520 "data_size": 7936 00:24:15.520 }, 00:24:15.520 { 00:24:15.520 "name": "BaseBdev2", 00:24:15.520 
"uuid": "4a361ebe-5b2c-5737-9900-9cc19e3f320e", 00:24:15.520 "is_configured": true, 00:24:15.520 "data_offset": 256, 00:24:15.520 "data_size": 7936 00:24:15.520 } 00:24:15.520 ] 00:24:15.520 }' 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # killprocess 75484 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 75484 ']' 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 75484 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75484 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:15.520 killing process with pid 75484 00:24:15.520 Received shutdown signal, test time was about 60.000000 seconds 00:24:15.520 00:24:15.520 Latency(us) 00:24:15.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.520 =================================================================================================================== 00:24:15.520 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75484' 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 75484 00:24:15.520 23:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 75484 00:24:15.520 [2024-05-14 23:39:38.792596] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:15.520 [2024-05-14 23:39:38.792714] bdev_raid.c: 448:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:15.520 [2024-05-14 23:39:38.792754] bdev_raid.c: 425:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:15.520 [2024-05-14 23:39:38.792766] bdev_raid.c: 350:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:24:15.779 [2024-05-14 23:39:39.032184] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:17.155 ************************************ 00:24:17.155 END TEST raid_rebuild_test_sb_md_interleaved 00:24:17.155 ************************************ 00:24:17.155 23:39:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@797 -- # return 0 00:24:17.155 00:24:17.155 real 0m31.309s 00:24:17.155 user 0m51.410s 00:24:17.155 sys 0m2.351s 00:24:17.155 23:39:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:17.155 23:39:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:17.155 23:39:40 bdev_raid -- bdev/bdev_raid.sh@862 -- # rm -f /raidrandtest 00:24:17.155 ************************************ 00:24:17.155 END TEST bdev_raid 00:24:17.155 ************************************ 00:24:17.155 00:24:17.155 real 12m21.127s 00:24:17.155 user 22m41.950s 00:24:17.155 sys 1m14.788s 00:24:17.155 23:39:40 bdev_raid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:17.155 23:39:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:17.155 23:39:40 -- spdk/autotest.sh@187 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:24:17.155 23:39:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:17.155 23:39:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:17.155 23:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.155 ************************************ 00:24:17.155 START TEST bdevperf_config 00:24:17.155 ************************************ 00:24:17.155 23:39:40 bdevperf_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:24:17.155 * Looking for test storage... 00:24:17.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:24:17.155 23:39:40 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:24:17.155 23:39:40 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:24:17.156 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:17.156 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:17.156 23:39:40 bdevperf_config -- 
bdevperf/common.sh@20 -- # cat 00:24:17.156 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:17.156 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:17.156 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:17.156 23:39:40 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:22.423 23:39:44 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-05-14 23:39:40.553492] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:22.423 [2024-05-14 23:39:40.553677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76369 ] 00:24:22.423 Using job config with 4 jobs 00:24:22.423 [2024-05-14 23:39:40.704801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.423 [2024-05-14 23:39:40.917831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.423 cpumask for '\''job0'\'' is too big 00:24:22.423 cpumask for '\''job1'\'' is too big 00:24:22.423 cpumask for '\''job2'\'' is too big 00:24:22.423 cpumask for '\''job3'\'' is too big 00:24:22.423 Running I/O for 2 seconds... 
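The create_job calls above assemble the job file that is then passed to bdevperf with -j. The trace shows which sections are created ([global] reading from Malloc0, plus empty [job0] through [job3]) but not the exact lines written, so the sketch below is an assumed reconstruction of the generated test.conf rather than a dump of the real file:

    # Hypothetical reconstruction of the -j job file built by create_job; the key names are assumed.
    cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf <<'EOF'
    [global]
    filename=Malloc0
    rw=read

    [job0]

    [job1]

    [job2]

    [job3]
    EOF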
00:24:22.423 00:24:22.423 Latency(us) 00:24:22.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 74020.31 72.29 0.00 0.00 3456.63 685.15 5600.35 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 74005.55 72.27 0.00 0.00 3454.47 737.28 4915.20 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 73991.24 72.26 0.00 0.00 3452.40 748.45 4170.47 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 73976.68 72.24 0.00 0.00 3450.19 673.98 3738.53 00:24:22.423 =================================================================================================================== 00:24:22.423 Total : 295993.78 289.06 0.00 0.00 3453.42 673.98 5600.35' 00:24:22.423 23:39:44 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-05-14 23:39:40.553492] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:22.423 [2024-05-14 23:39:40.553677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76369 ] 00:24:22.423 Using job config with 4 jobs 00:24:22.423 [2024-05-14 23:39:40.704801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.423 [2024-05-14 23:39:40.917831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.423 cpumask for '\''job0'\'' is too big 00:24:22.423 cpumask for '\''job1'\'' is too big 00:24:22.423 cpumask for '\''job2'\'' is too big 00:24:22.423 cpumask for '\''job3'\'' is too big 00:24:22.423 Running I/O for 2 seconds... 00:24:22.423 00:24:22.423 Latency(us) 00:24:22.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 74020.31 72.29 0.00 0.00 3456.63 685.15 5600.35 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 74005.55 72.27 0.00 0.00 3454.47 737.28 4915.20 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 73991.24 72.26 0.00 0.00 3452.40 748.45 4170.47 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 73976.68 72.24 0.00 0.00 3450.19 673.98 3738.53 00:24:22.423 =================================================================================================================== 00:24:22.423 Total : 295993.78 289.06 0.00 0.00 3453.42 673.98 5600.35' 00:24:22.423 23:39:44 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-14 23:39:40.553492] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:24:22.423 [2024-05-14 23:39:40.553677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76369 ] 00:24:22.423 Using job config with 4 jobs 00:24:22.423 [2024-05-14 23:39:40.704801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.423 [2024-05-14 23:39:40.917831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.423 cpumask for '\''job0'\'' is too big 00:24:22.423 cpumask for '\''job1'\'' is too big 00:24:22.423 cpumask for '\''job2'\'' is too big 00:24:22.423 cpumask for '\''job3'\'' is too big 00:24:22.423 Running I/O for 2 seconds... 00:24:22.423 00:24:22.423 Latency(us) 00:24:22.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 74020.31 72.29 0.00 0.00 3456.63 685.15 5600.35 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 74005.55 72.27 0.00 0.00 3454.47 737.28 4915.20 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 73991.24 72.26 0.00 0.00 3452.40 748.45 4170.47 00:24:22.423 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:22.423 Malloc0 : 2.01 73976.68 72.24 0.00 0.00 3450.19 673.98 3738.53 00:24:22.423 =================================================================================================================== 00:24:22.423 Total : 295993.78 289.06 0.00 0.00 3453.42 673.98 5600.35' 00:24:22.423 23:39:44 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:22.423 23:39:44 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:22.423 23:39:44 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:24:22.423 23:39:44 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:22.423 [2024-05-14 23:39:45.003741] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:22.423 [2024-05-14 23:39:45.003927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76427 ] 00:24:22.423 [2024-05-14 23:39:45.156983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.424 [2024-05-14 23:39:45.402870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.682 cpumask for 'job0' is too big 00:24:22.682 cpumask for 'job1' is too big 00:24:22.682 cpumask for 'job2' is too big 00:24:22.682 cpumask for 'job3' is too big 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:24:26.871 Running I/O for 2 seconds... 
00:24:26.871 00:24:26.871 Latency(us) 00:24:26.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.871 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:26.871 Malloc0 : 2.00 73046.82 71.33 0.00 0.00 3502.39 722.39 6374.87 00:24:26.871 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:26.871 Malloc0 : 2.01 73033.05 71.32 0.00 0.00 3499.96 666.53 5510.98 00:24:26.871 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:26.871 Malloc0 : 2.01 73086.08 71.37 0.00 0.00 3494.77 647.91 5064.15 00:24:26.871 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:26.871 Malloc0 : 2.01 73071.23 71.36 0.00 0.00 3492.11 700.04 5093.93 00:24:26.871 =================================================================================================================== 00:24:26.871 Total : 292237.18 285.39 0.00 0.00 3497.30 647.91 6374.87' 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:26.871 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:26.871 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:26.871 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:26.871 23:39:49 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:31.060 
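The [[ 4 == \4 ]] and [[ 3 == \3 ]] assertions in this trace come from get_num_jobs, which simply greps the captured bdevperf output for the "Using job config with N jobs" banner. A minimal sketch using the same grep patterns that appear in the log (the helper in bdevperf/common.sh is assumed to do little more than this):

    # Extract the job count bdevperf reports for a captured run.
    get_num_jobs() {
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }
    # e.g. [[ $(get_num_jobs "$bdevperf_output") == 4 ]]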
23:39:53 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-05-14 23:39:49.570491] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:31.060 [2024-05-14 23:39:49.570688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76490 ] 00:24:31.060 Using job config with 3 jobs 00:24:31.060 [2024-05-14 23:39:49.720586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.060 [2024-05-14 23:39:49.933391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.060 cpumask for '\''job0'\'' is too big 00:24:31.060 cpumask for '\''job1'\'' is too big 00:24:31.060 cpumask for '\''job2'\'' is too big 00:24:31.060 Running I/O for 2 seconds... 00:24:31.060 00:24:31.060 Latency(us) 00:24:31.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.060 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:31.060 Malloc0 : 2.00 97385.00 95.10 0.00 0.00 2626.64 711.21 4051.32 00:24:31.060 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:31.060 Malloc0 : 2.01 97400.14 95.12 0.00 0.00 2624.29 692.60 3544.90 00:24:31.060 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:31.060 Malloc0 : 2.01 97378.24 95.10 0.00 0.00 2622.73 673.98 3544.90 00:24:31.060 =================================================================================================================== 00:24:31.060 Total : 292163.39 285.32 0.00 0.00 2624.55 673.98 4051.32' 00:24:31.060 23:39:53 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-05-14 23:39:49.570491] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:31.060 [2024-05-14 23:39:49.570688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76490 ] 00:24:31.060 Using job config with 3 jobs 00:24:31.060 [2024-05-14 23:39:49.720586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.060 [2024-05-14 23:39:49.933391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.060 cpumask for '\''job0'\'' is too big 00:24:31.060 cpumask for '\''job1'\'' is too big 00:24:31.060 cpumask for '\''job2'\'' is too big 00:24:31.060 Running I/O for 2 seconds... 
00:24:31.060 00:24:31.060 Latency(us) 00:24:31.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.060 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:31.060 Malloc0 : 2.00 97385.00 95.10 0.00 0.00 2626.64 711.21 4051.32 00:24:31.060 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:31.060 Malloc0 : 2.01 97400.14 95.12 0.00 0.00 2624.29 692.60 3544.90 00:24:31.060 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:31.060 Malloc0 : 2.01 97378.24 95.10 0.00 0.00 2622.73 673.98 3544.90 00:24:31.060 =================================================================================================================== 00:24:31.060 Total : 292163.39 285.32 0.00 0.00 2624.55 673.98 4051.32' 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-14 23:39:49.570491] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:31.061 [2024-05-14 23:39:49.570688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76490 ] 00:24:31.061 Using job config with 3 jobs 00:24:31.061 [2024-05-14 23:39:49.720586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.061 [2024-05-14 23:39:49.933391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.061 cpumask for '\''job0'\'' is too big 00:24:31.061 cpumask for '\''job1'\'' is too big 00:24:31.061 cpumask for '\''job2'\'' is too big 00:24:31.061 Running I/O for 2 seconds... 00:24:31.061 00:24:31.061 Latency(us) 00:24:31.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.061 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:31.061 Malloc0 : 2.00 97385.00 95.10 0.00 0.00 2626.64 711.21 4051.32 00:24:31.061 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:31.061 Malloc0 : 2.01 97400.14 95.12 0.00 0.00 2624.29 692.60 3544.90 00:24:31.061 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:31.061 Malloc0 : 2.01 97378.24 95.10 0.00 0.00 2622.73 673.98 3544.90 00:24:31.061 =================================================================================================================== 00:24:31.061 Total : 292163.39 285.32 0.00 0.00 2624.55 673.98 4051.32' 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:24:31.061 23:39:53 bdevperf_config -- 
bdevperf/common.sh@13 -- # cat 00:24:31.061 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:24:31.061 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:31.061 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:31.061 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:31.061 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:24:31.061 23:39:53 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:35.253 23:39:58 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-05-14 23:39:54.095577] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:24:35.253 [2024-05-14 23:39:54.095770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76551 ] 00:24:35.253 Using job config with 4 jobs 00:24:35.253 [2024-05-14 23:39:54.248497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.253 [2024-05-14 23:39:54.514773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.253 cpumask for '\''job0'\'' is too big 00:24:35.253 cpumask for '\''job1'\'' is too big 00:24:35.253 cpumask for '\''job2'\'' is too big 00:24:35.253 cpumask for '\''job3'\'' is too big 00:24:35.253 Running I/O for 2 seconds... 00:24:35.253 00:24:35.253 Latency(us) 00:24:35.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.253 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc0 : 2.01 36451.24 35.60 0.00 0.00 7019.46 1541.59 11915.64 00:24:35.253 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc1 : 2.01 36442.86 35.59 0.00 0.00 7018.38 1765.00 12094.37 00:24:35.253 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc0 : 2.01 36468.61 35.61 0.00 0.00 7004.48 1459.67 10783.65 00:24:35.253 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc1 : 2.02 36460.52 35.61 0.00 0.00 7003.20 1608.61 10902.81 00:24:35.253 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc0 : 2.02 36453.56 35.60 0.00 0.00 6996.15 1549.03 9472.93 00:24:35.253 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc1 : 2.02 36445.47 35.59 0.00 0.00 6994.90 1690.53 9532.51 00:24:35.253 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc0 : 2.02 36438.50 35.58 0.00 0.00 6987.13 1519.24 8757.99 00:24:35.253 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc1 : 2.02 36430.44 35.58 0.00 0.00 6985.52 1630.95 8877.15 00:24:35.253 =================================================================================================================== 00:24:35.253 Total : 291591.21 284.76 0.00 0.00 7001.14 1459.67 12094.37' 00:24:35.253 23:39:58 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-05-14 23:39:54.095577] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:35.253 [2024-05-14 23:39:54.095770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76551 ] 00:24:35.253 Using job config with 4 jobs 00:24:35.253 [2024-05-14 23:39:54.248497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.253 [2024-05-14 23:39:54.514773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.253 cpumask for '\''job0'\'' is too big 00:24:35.253 cpumask for '\''job1'\'' is too big 00:24:35.253 cpumask for '\''job2'\'' is too big 00:24:35.253 cpumask for '\''job3'\'' is too big 00:24:35.253 Running I/O for 2 seconds... 
00:24:35.253 00:24:35.253 Latency(us) 00:24:35.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.253 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc0 : 2.01 36451.24 35.60 0.00 0.00 7019.46 1541.59 11915.64 00:24:35.253 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc1 : 2.01 36442.86 35.59 0.00 0.00 7018.38 1765.00 12094.37 00:24:35.253 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc0 : 2.01 36468.61 35.61 0.00 0.00 7004.48 1459.67 10783.65 00:24:35.253 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc1 : 2.02 36460.52 35.61 0.00 0.00 7003.20 1608.61 10902.81 00:24:35.253 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc0 : 2.02 36453.56 35.60 0.00 0.00 6996.15 1549.03 9472.93 00:24:35.253 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc1 : 2.02 36445.47 35.59 0.00 0.00 6994.90 1690.53 9532.51 00:24:35.253 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc0 : 2.02 36438.50 35.58 0.00 0.00 6987.13 1519.24 8757.99 00:24:35.253 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.253 Malloc1 : 2.02 36430.44 35.58 0.00 0.00 6985.52 1630.95 8877.15 00:24:35.253 =================================================================================================================== 00:24:35.254 Total : 291591.21 284.76 0.00 0.00 7001.14 1459.67 12094.37' 00:24:35.254 23:39:58 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-14 23:39:54.095577] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:35.254 [2024-05-14 23:39:54.095770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76551 ] 00:24:35.254 Using job config with 4 jobs 00:24:35.254 [2024-05-14 23:39:54.248497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.254 [2024-05-14 23:39:54.514773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.254 cpumask for '\''job0'\'' is too big 00:24:35.254 cpumask for '\''job1'\'' is too big 00:24:35.254 cpumask for '\''job2'\'' is too big 00:24:35.254 cpumask for '\''job3'\'' is too big 00:24:35.254 Running I/O for 2 seconds... 
00:24:35.254 00:24:35.254 Latency(us) 00:24:35.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.254 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.254 Malloc0 : 2.01 36451.24 35.60 0.00 0.00 7019.46 1541.59 11915.64 00:24:35.254 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.254 Malloc1 : 2.01 36442.86 35.59 0.00 0.00 7018.38 1765.00 12094.37 00:24:35.254 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.254 Malloc0 : 2.01 36468.61 35.61 0.00 0.00 7004.48 1459.67 10783.65 00:24:35.254 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.254 Malloc1 : 2.02 36460.52 35.61 0.00 0.00 7003.20 1608.61 10902.81 00:24:35.254 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.254 Malloc0 : 2.02 36453.56 35.60 0.00 0.00 6996.15 1549.03 9472.93 00:24:35.254 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.254 Malloc1 : 2.02 36445.47 35.59 0.00 0.00 6994.90 1690.53 9532.51 00:24:35.254 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.254 Malloc0 : 2.02 36438.50 35.58 0.00 0.00 6987.13 1519.24 8757.99 00:24:35.254 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:35.254 Malloc1 : 2.02 36430.44 35.58 0.00 0.00 6985.52 1630.95 8877.15 00:24:35.254 =================================================================================================================== 00:24:35.254 Total : 291591.21 284.76 0.00 0.00 7001.14 1459.67 12094.37' 00:24:35.254 23:39:58 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:35.254 23:39:58 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:35.254 23:39:58 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:24:35.254 23:39:58 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:24:35.254 23:39:58 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:35.254 23:39:58 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:35.254 ************************************ 00:24:35.254 END TEST bdevperf_config 00:24:35.254 ************************************ 00:24:35.254 00:24:35.254 real 0m18.161s 00:24:35.254 user 0m16.200s 00:24:35.254 sys 0m1.139s 00:24:35.254 23:39:58 bdevperf_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:35.254 23:39:58 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:24:35.254 23:39:58 -- spdk/autotest.sh@188 -- # uname -s 00:24:35.254 23:39:58 -- spdk/autotest.sh@188 -- # [[ Linux == Linux ]] 00:24:35.254 23:39:58 -- spdk/autotest.sh@189 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:35.254 23:39:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:35.254 23:39:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:35.254 23:39:58 -- common/autotest_common.sh@10 -- # set +x 00:24:35.254 ************************************ 00:24:35.254 START TEST reactor_set_interrupt 00:24:35.254 ************************************ 00:24:35.254 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 
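[editor's note] Each bdevperf_config step is validated the same way before the next one starts: the captured console output is echoed back through get_num_jobs (bdevperf/common.sh@32), which extracts the "Using job config with N jobs" banner and yields the number that test_config.sh compares against the expected count ([[ 3 == \3 ]] and [[ 4 == \4 ]] in the trace above), after which cleanup removes test.conf. A minimal bash sketch of that check, using the exact grep expressions from the trace; the function body is a reconstruction rather than the verbatim helper, and it assumes the banner appears once in the captured output:

    get_num_jobs() {
        # $1: full bdevperf console output captured by the test script
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }

    num_jobs=$(get_num_jobs "$bdevperf_output")
    [[ "$num_jobs" == "4" ]]   # test_config.sh@43 asserts the expected job count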
00:24:35.514 * Looking for test storage... 00:24:35.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:35.514 23:39:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:24:35.514 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:35.514 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:35.514 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:35.515 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:24:35.515 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:35.515 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:24:35.515 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:35.515 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:24:35.515 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:35.515 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:35.515 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:24:35.515 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:35.515 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_RDMA=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_UNIT_TESTS=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_GOLANG=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_FUSE=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_ISAL=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_VTUNE_DIR= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_CUSTOMOCF=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_IPSEC_MB_DIR= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_VBDEV_COMPRESS=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_OCF_PATH= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_SHARED=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_PGO_DIR= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TESTS=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_APPS=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_ISAL_CRYPTO=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_LIBDIR= 
00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_DAOS_DIR= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_ISCSI_INITIATOR=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_ASAN=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_LTO=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_CET=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_FUZZER=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_USDT=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_VTUNE=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_VHOST=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_WPDK_DIR= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_UBLK=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_URING=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_SMA=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_IDXD_KERNEL=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FC_PATH= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_PREFIX=/usr/local 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_XNVME=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_RDMA_PROV=verbs 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_RDMA_SET_TOS=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_FUZZER_LIB= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_ARCH=native 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_PGO_CAPTURE=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_DAOS=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_WERROR=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_DEBUG=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_AVAHI=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_CROSS_PREFIX= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_HAVE_KEYUTILS=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_PGO_USE=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_CRYPTO=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_HAVE_ARC4RANDOM=n 00:24:35.515 23:39:58 reactor_set_interrupt -- 
common/build_config.sh@54 -- # CONFIG_OPENSSL_PATH= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_EXAMPLES=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_DPDK_INC_DIR= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_EVP_MAC=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_MAX_LCORES= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_VIRTIO=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_IPSEC_MB=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_UBSAN=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_HAVE_LIBBSD=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_URING_PATH= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_NVME_CUSE=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_URING_ZNS=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_VFIO_USER=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RBD=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_RAID5F=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_VFIO_USER_DIR= 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_TSAN=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IDXD=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_DPDK_UADK=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_OCF=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_FIO_PLUGIN=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_COVERAGE=y 00:24:35.515 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:35.515 23:39:58 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:35.515 23:39:58 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:24:35.515 23:39:58 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:24:35.515 23:39:58 reactor_set_interrupt -- 
common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:24:35.515 23:39:58 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:24:35.515 23:39:58 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:24:35.515 23:39:58 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:24:35.515 23:39:58 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:35.515 23:39:58 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:35.515 23:39:58 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:35.516 23:39:58 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:35.516 23:39:58 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:35.516 23:39:58 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:35.516 23:39:58 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:24:35.516 23:39:58 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:35.516 #define SPDK_CONFIG_H 00:24:35.516 #define SPDK_CONFIG_APPS 1 00:24:35.516 #define SPDK_CONFIG_ARCH native 00:24:35.516 #define SPDK_CONFIG_ASAN 1 00:24:35.516 #undef SPDK_CONFIG_AVAHI 00:24:35.516 #undef SPDK_CONFIG_CET 00:24:35.516 #define SPDK_CONFIG_COVERAGE 1 00:24:35.516 #define SPDK_CONFIG_CROSS_PREFIX 00:24:35.516 #undef SPDK_CONFIG_CRYPTO 00:24:35.516 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:35.516 #undef SPDK_CONFIG_CUSTOMOCF 00:24:35.516 #define SPDK_CONFIG_DAOS 1 00:24:35.516 #define SPDK_CONFIG_DAOS_DIR 00:24:35.516 #define SPDK_CONFIG_DEBUG 1 00:24:35.516 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:35.516 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:24:35.516 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:35.516 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:35.516 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:35.516 #undef SPDK_CONFIG_DPDK_UADK 00:24:35.516 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:35.516 #define SPDK_CONFIG_EXAMPLES 1 00:24:35.516 #undef SPDK_CONFIG_FC 00:24:35.516 #define SPDK_CONFIG_FC_PATH 00:24:35.516 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:35.516 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:35.516 #undef SPDK_CONFIG_FUSE 00:24:35.516 #undef SPDK_CONFIG_FUZZER 00:24:35.516 #define SPDK_CONFIG_FUZZER_LIB 00:24:35.516 #undef SPDK_CONFIG_GOLANG 00:24:35.516 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:24:35.516 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:24:35.516 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:35.516 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:24:35.516 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:35.516 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:35.516 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:24:35.516 #define SPDK_CONFIG_IDXD 1 00:24:35.516 #undef SPDK_CONFIG_IDXD_KERNEL 00:24:35.516 #undef SPDK_CONFIG_IPSEC_MB 00:24:35.516 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:35.516 #undef SPDK_CONFIG_ISAL 00:24:35.516 #undef SPDK_CONFIG_ISAL_CRYPTO 00:24:35.516 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:24:35.516 #define SPDK_CONFIG_LIBDIR 00:24:35.516 #undef SPDK_CONFIG_LTO 00:24:35.516 #define SPDK_CONFIG_MAX_LCORES 00:24:35.516 #define SPDK_CONFIG_NVME_CUSE 1 00:24:35.516 
#undef SPDK_CONFIG_OCF 00:24:35.516 #define SPDK_CONFIG_OCF_PATH 00:24:35.516 #define SPDK_CONFIG_OPENSSL_PATH 00:24:35.516 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:35.516 #define SPDK_CONFIG_PGO_DIR 00:24:35.516 #undef SPDK_CONFIG_PGO_USE 00:24:35.516 #define SPDK_CONFIG_PREFIX /usr/local 00:24:35.516 #undef SPDK_CONFIG_RAID5F 00:24:35.516 #undef SPDK_CONFIG_RBD 00:24:35.516 #define SPDK_CONFIG_RDMA 1 00:24:35.516 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:35.516 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:35.516 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:24:35.516 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:35.516 #undef SPDK_CONFIG_SHARED 00:24:35.516 #undef SPDK_CONFIG_SMA 00:24:35.516 #define SPDK_CONFIG_TESTS 1 00:24:35.516 #undef SPDK_CONFIG_TSAN 00:24:35.516 #undef SPDK_CONFIG_UBLK 00:24:35.516 #undef SPDK_CONFIG_UBSAN 00:24:35.516 #define SPDK_CONFIG_UNIT_TESTS 1 00:24:35.516 #undef SPDK_CONFIG_URING 00:24:35.516 #define SPDK_CONFIG_URING_PATH 00:24:35.516 #undef SPDK_CONFIG_URING_ZNS 00:24:35.516 #undef SPDK_CONFIG_USDT 00:24:35.516 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:35.516 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:35.516 #undef SPDK_CONFIG_VFIO_USER 00:24:35.516 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:35.516 #define SPDK_CONFIG_VHOST 1 00:24:35.516 #define SPDK_CONFIG_VIRTIO 1 00:24:35.516 #undef SPDK_CONFIG_VTUNE 00:24:35.516 #define SPDK_CONFIG_VTUNE_DIR 00:24:35.516 #define SPDK_CONFIG_WERROR 1 00:24:35.516 #define SPDK_CONFIG_WPDK_DIR 00:24:35.516 #undef SPDK_CONFIG_XNVME 00:24:35.516 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:35.516 23:39:58 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:35.516 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:35.516 23:39:58 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.516 23:39:58 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.516 23:39:58 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.516 23:39:58 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:35.516 23:39:58 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:35.516 23:39:58 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:35.516 23:39:58 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:24:35.516 23:39:58 reactor_set_interrupt -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:35.516 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:24:35.516 23:39:58 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:24:35.517 23:39:58 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:24:35.517 23:39:58 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@57 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@61 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@63 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@65 -- # : 1 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@67 -- # : 1 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@69 -- # : 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@71 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@73 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@75 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@77 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@79 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@81 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@83 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@85 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@87 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@89 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@91 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@93 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:24:35.517 23:39:58 reactor_set_interrupt -- 
common/autotest_common.sh@95 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@97 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@99 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@101 -- # : rdma 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@103 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@105 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@107 -- # : 1 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@109 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@111 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@113 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@115 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@117 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@119 -- # : 1 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@121 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@123 -- # : 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@125 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@127 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@129 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@131 -- # : 0 00:24:35.517 23:39:58 
reactor_set_interrupt -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@133 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@135 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@137 -- # : 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@139 -- # : true 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@141 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@143 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@145 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@147 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@149 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@151 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@153 -- # : 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@155 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@157 -- # : 1 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@159 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@161 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@163 -- # : 0 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:24:35.517 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@168 -- # : 0 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 
00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@170 -- # : 0 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@192 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@199 -- # cat 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export QEMU_BIN= 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@252 -- # QEMU_BIN= 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@262 -- # export valgrind= 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@262 -- # valgrind= 00:24:35.518 23:39:58 
reactor_set_interrupt -- common/autotest_common.sh@268 -- # uname -s 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@278 -- # MAKE=make 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@298 -- # TEST_MODE= 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@317 -- # [[ -z 76662 ]] 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@317 -- # kill -0 76662 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local mount target_dir 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.U3ty6U 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.U3ty6U/tests/interrupt /tmp/spdk.U3ty6U 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@326 -- # df -T 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:24:35.518 23:39:58 
reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=6267637760 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267637760 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=6293479424 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=6277242880 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=20946944 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=6298189824 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:35.518 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda1 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=xfs 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=14334173184 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=21463302144 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=7129128960 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=1259638784 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # 
sizes["$mount"]=1259638784 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=92385394688 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=7317385216 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:24:35.519 * Looking for test storage... 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@367 -- # local target_space new_size 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@371 -- # mount=/ 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@373 -- # target_space=14334173184 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ xfs == tmpfs ]] 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ xfs == ramfs ]] 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@380 -- # new_size=9343721472 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:35.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@388 -- # return 0 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@1678 -- # set -o errtrace 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; 
print_backtrace >&2' ERR 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # true 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@1685 -- # xtrace_fd 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:35.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
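The storage probe traced above (df -T, the mounts/avails/sizes arrays, and the target_space comparison) boils down to: walk a list of candidate directories, take the first one whose filesystem still has at least the requested space free, and export it as SPDK_TEST_STORAGE. A minimal bash sketch of that selection, simplified from what the trace shows (the real set_test_storage in autotest_common.sh also applies an over-95%-usage guard that is only hinted at here):

  # Simplified reading of the storage selection visible in the trace above.
  pick_test_storage() {
      local requested_size=$1 testdir=$2
      local fallback candidate avail
      fallback=$(mktemp -udt spdk.XXXXXX)
      for candidate in "$testdir" "$fallback/tests/${testdir##*/}" "$fallback"; do
          mkdir -p "$candidate"
          # df -P: POSIX output, column 4 = available space in 1K blocks
          avail=$(( $(df -P "$candidate" | awk 'NR==2 {print $4}') * 1024 ))
          if (( avail >= requested_size )); then
              export SPDK_TEST_STORAGE=$candidate
              printf '* Found test storage at %s\n' "$candidate"
              return 0
          fi
      done
      return 1
  }
  # e.g.: pick_test_storage $((2 * 1024 * 1024 * 1024)) /home/vagrant/spdk_repo/spdk/test/interrupt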
00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=76706 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 76706 /var/tmp/spdk.sock 00:24:35.519 23:39:58 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@827 -- # '[' -z 76706 ']' 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:35.519 23:39:58 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:35.778 [2024-05-14 23:39:58.835986] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:35.778 [2024-05-14 23:39:58.836190] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76706 ] 00:24:35.778 [2024-05-14 23:39:58.986372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:36.036 [2024-05-14 23:39:59.194887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.036 [2024-05-14 23:39:59.194953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.036 [2024-05-14 23:39:59.194949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.295 [2024-05-14 23:39:59.485556] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
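start_intr_tgt, as traced here, amounts to launching the interrupt_tgt example on the 0x07 core mask and blocking until its RPC socket answers. A sketch under those assumptions; the binary path, flags and socket are copied from the log, while the polling loop is a simplified stand-in for waitforlisten, whose exact retry logic is not shown:

  SPDK_ROOT=/home/vagrant/spdk_repo/spdk
  rpc_addr=/var/tmp/spdk.sock
  cpu_mask=0x07

  # Launch the interrupt-mode target in the background, as in the trace.
  "$SPDK_ROOT/build/examples/interrupt_tgt" -m "$cpu_mask" -r "$rpc_addr" -E -g &
  intr_tgt_pid=$!
  trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

  # Poll the RPC socket until the target is ready to accept commands.
  for ((i = 0; i < 100; i++)); do
      if "$SPDK_ROOT/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done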
00:24:36.554 23:39:59 reactor_set_interrupt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:36.554 23:39:59 reactor_set_interrupt -- common/autotest_common.sh@860 -- # return 0 00:24:36.554 23:39:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:24:36.554 23:39:59 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:36.813 Malloc0 00:24:36.813 Malloc1 00:24:36.813 Malloc2 00:24:36.813 23:39:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:24:36.813 23:39:59 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:24:36.813 23:39:59 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:36.813 23:39:59 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:36.813 5000+0 records in 00:24:36.813 5000+0 records out 00:24:36.813 10240000 bytes (10 MB) copied, 0.0197845 s, 518 MB/s 00:24:36.813 23:39:59 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:37.102 AIO0 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 76706 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 76706 without_thd 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=76706 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:37.102 23:40:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:37.361 23:40:00 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:24:37.361 spdk_thread ids are 1 on reactor0. 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 76706 0 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 76706 0 idle 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76706 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76706 -w 256 00:24:37.361 23:40:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76706 root 20 0 20.1t 124440 13304 S 0.0 1.0 0:00.70 reactor_0' 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76706 root 20 0 20.1t 124440 13304 S 0.0 1.0 0:00.70 reactor_0 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 76706 1 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 76706 1 idle 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76706 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:37.619 23:40:00 reactor_set_interrupt -- 
interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76706 -w 256 00:24:37.619 23:40:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:24:37.878 23:40:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76710 root 20 0 20.1t 124440 13304 S 0.0 1.0 0:00.00 reactor_1' 00:24:37.878 23:40:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76710 root 20 0 20.1t 124440 13304 S 0.0 1.0 0:00.00 reactor_1 00:24:37.878 23:40:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:37.878 23:40:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:37.878 23:40:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:37.878 23:40:00 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 76706 2 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 76706 2 idle 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76706 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76706 -w 256 00:24:37.879 23:40:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76711 root 20 0 20.1t 124440 13304 S 0.0 1.0 0:00.00 reactor_2' 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76711 root 20 0 20.1t 124440 13304 S 0.0 1.0 0:00.00 reactor_2 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:37.879 23:40:01 
reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:24:37.879 23:40:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:24:38.137 [2024-05-14 23:40:01.337249] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:38.137 23:40:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:38.396 [2024-05-14 23:40:01.529063] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:38.396 [2024-05-14 23:40:01.530136] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:38.396 23:40:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:38.654 [2024-05-14 23:40:01.725003] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:38.654 [2024-05-14 23:40:01.726664] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 76706 0 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 76706 0 busy 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76706 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76706 -w 256 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76706 root 20 0 20.1t 124576 13304 R 99.9 1.0 0:01.07 reactor_0' 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76706 root 20 0 20.1t 124576 13304 R 99.9 1.0 0:01.07 reactor_0 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:38.654 23:40:01 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 76706 2 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 76706 2 busy 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76706 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76706 -w 256 00:24:38.654 23:40:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:38.912 23:40:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76711 root 20 0 20.1t 124576 13304 R 99.9 1.0 0:00.33 reactor_2' 00:24:38.912 23:40:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:38.912 23:40:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:38.912 23:40:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76711 root 20 0 20.1t 124576 13304 R 99.9 1.0 0:00.33 reactor_2 00:24:38.912 23:40:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:38.912 23:40:02 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:38.912 23:40:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:38.912 23:40:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:38.912 23:40:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:38.912 23:40:02 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:38.912 23:40:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:39.169 [2024-05-14 23:40:02.305024] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:24:39.169 [2024-05-14 23:40:02.305368] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 76706 2 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 76706 2 idle 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76706 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:39.169 23:40:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76706 -w 256 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76711 root 20 0 20.1t 124640 13304 S 0.0 1.0 0:00.58 reactor_2' 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76711 root 20 0 20.1t 124640 13304 S 0.0 1.0 0:00.58 reactor_2 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:39.427 [2024-05-14 23:40:02.668970] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:39.427 [2024-05-14 23:40:02.669773] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:24:39.427 23:40:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:24:39.686 [2024-05-14 23:40:02.853181] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
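The idle/busy verdicts above all come from one pattern: snapshot the reactor thread with top, pull the %CPU column (field 9 of "top -bHn 1" output), and compare it against a threshold, roughly at least 70% for busy and at most 30% for idle, judging by the comparisons in the trace. A condensed sketch of that check; treat the thresholds and the integer truncation as a reading of the trace rather than the literal interrupt/common.sh source:

  # Condensed busy/idle check for a reactor thread of the interrupt target.
  reactor_state_ok() {
      local pid=$1 idx=$2 expected=$3   # expected: "busy" or "idle"
      local line cpu
      line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
      cpu=$(awk '{print $9}' <<<"$line")   # %CPU, e.g. 99.9 or 0.0
      cpu=${cpu%.*}                        # drop the fractional part, as the trace does
      if [[ $expected == busy ]]; then
          (( cpu >= 70 ))
      else
          (( cpu <= 30 ))
      fi
  }
  # e.g.: reactor_state_ok "$intr_tgt_pid" 0 idle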
00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 76706 0 00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 76706 0 idle 00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76706 00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:39.686 23:40:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76706 -w 256 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76706 root 20 0 20.1t 124724 13304 S 6.7 1.0 0:01.85 reactor_0' 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76706 root 20 0 20.1t 124724 13304 S 6.7 1.0 0:01.85 reactor_0 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:24:39.945 23:40:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 76706 00:24:39.945 23:40:03 reactor_set_interrupt -- common/autotest_common.sh@946 -- # '[' -z 76706 ']' 00:24:39.945 23:40:03 reactor_set_interrupt -- common/autotest_common.sh@950 -- # kill -0 76706 00:24:39.945 23:40:03 reactor_set_interrupt -- common/autotest_common.sh@951 -- # uname 00:24:39.945 23:40:03 reactor_set_interrupt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:39.945 23:40:03 reactor_set_interrupt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76706 00:24:39.945 killing process with pid 76706 00:24:39.945 23:40:03 reactor_set_interrupt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:39.945 23:40:03 reactor_set_interrupt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:39.945 23:40:03 reactor_set_interrupt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76706' 00:24:39.945 23:40:03 reactor_set_interrupt -- common/autotest_common.sh@965 -- # kill 
76706 00:24:39.945 23:40:03 reactor_set_interrupt -- common/autotest_common.sh@970 -- # wait 76706 00:24:41.358 23:40:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:24:41.358 23:40:04 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:41.358 23:40:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:24:41.358 23:40:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.358 23:40:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:41.358 23:40:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=76854 00:24:41.358 23:40:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:41.358 23:40:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 76854 /var/tmp/spdk.sock 00:24:41.358 23:40:04 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:41.358 23:40:04 reactor_set_interrupt -- common/autotest_common.sh@827 -- # '[' -z 76854 ']' 00:24:41.358 23:40:04 reactor_set_interrupt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.358 23:40:04 reactor_set_interrupt -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:41.358 23:40:04 reactor_set_interrupt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.358 23:40:04 reactor_set_interrupt -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:41.358 23:40:04 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:41.358 [2024-05-14 23:40:04.547191] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:41.358 [2024-05-14 23:40:04.547370] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76854 ] 00:24:41.616 [2024-05-14 23:40:04.700890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:41.873 [2024-05-14 23:40:04.910911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.873 [2024-05-14 23:40:04.910989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.873 [2024-05-14 23:40:04.910984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.131 [2024-05-14 23:40:05.206131] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
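The second pass (reactor_set_mode_with_threads, pid 76854) replays the same sequence as the first but without the without_thd cpumask shuffling: reactors 0 and 2 are dropped out of interrupt mode over RPC, verified busy, then switched back and verified idle, with the app_thread itself now following along (note the "to poll mode from intr mode" message later in the log). The RPC side of that toggle, with the invocations copied from the trace and the verification steps only indicated:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  pid=$intr_tgt_pid   # 76854 in this run

  # Drop reactors 0 and 2 out of interrupt mode (-d = disable interrupt),
  # then expect both reactor threads to show up busy in top.
  "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
  "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
  # reactor_is_busy "$pid" 0 ; reactor_is_busy "$pid" 2

  # Re-enable interrupt mode and expect the reactors to go idle again.
  "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2
  # reactor_is_idle "$pid" 2
  "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 0
  # reactor_is_idle "$pid" 0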
00:24:42.131 23:40:05 reactor_set_interrupt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:42.131 23:40:05 reactor_set_interrupt -- common/autotest_common.sh@860 -- # return 0 00:24:42.131 23:40:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:24:42.131 23:40:05 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:42.388 Malloc0 00:24:42.389 Malloc1 00:24:42.389 Malloc2 00:24:42.389 23:40:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:24:42.389 23:40:05 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:24:42.389 23:40:05 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:42.389 23:40:05 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:42.646 5000+0 records in 00:24:42.646 5000+0 records out 00:24:42.646 10240000 bytes (10 MB) copied, 0.0216157 s, 474 MB/s 00:24:42.646 23:40:05 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:42.646 AIO0 00:24:42.646 23:40:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 76854 00:24:42.646 23:40:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 76854 00:24:42.646 23:40:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=76854 00:24:42.646 23:40:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:24:42.646 23:40:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:42.646 23:40:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:42.646 23:40:05 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:24:42.646 23:40:05 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:42.646 23:40:05 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:24:42.646 23:40:05 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:42.903 23:40:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:42.903 23:40:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:42.903 23:40:06 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:24:42.903 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:42.903 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:42.903 23:40:06 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:24:42.903 23:40:06 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:42.903 23:40:06 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:24:42.903 23:40:06 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:42.903 23:40:06 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:42.903 23:40:06 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:24:43.161 spdk_thread ids are 1 on reactor0. 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 76854 0 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 76854 0 idle 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76854 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76854 -w 256 00:24:43.161 23:40:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:43.418 23:40:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76854 root 20 0 20.1t 120684 13304 R 6.7 1.0 0:00.72 reactor_0' 00:24:43.418 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76854 root 20 0 20.1t 120684 13304 R 6.7 1.0 0:00.72 reactor_0 00:24:43.418 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:43.418 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:43.418 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:24:43.418 23:40:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:24:43.418 23:40:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:43.418 23:40:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:43.418 23:40:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:24:43.418 23:40:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 76854 1 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 76854 1 idle 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76854 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != 
\b\u\s\y ]] 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76854 -w 256 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:24:43.419 23:40:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76866 root 20 0 20.1t 120684 13304 S 0.0 1.0 0:00.00 reactor_1' 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76866 root 20 0 20.1t 120684 13304 S 0.0 1.0 0:00.00 reactor_1 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 76854 2 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 76854 2 idle 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76854 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76854 -w 256 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76867 root 20 0 20.1t 120684 13304 S 0.0 1.0 0:00.00 reactor_2' 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76867 root 20 0 20.1t 120684 13304 S 0.0 1.0 0:00.00 reactor_2 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:43.677 23:40:06 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:24:43.677 23:40:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:43.935 [2024-05-14 23:40:07.101228] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:43.935 [2024-05-14 23:40:07.101466] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:24:43.935 [2024-05-14 23:40:07.101692] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:43.935 23:40:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:44.193 [2024-05-14 23:40:07.341024] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:44.193 [2024-05-14 23:40:07.341374] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 76854 0 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 76854 0 busy 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76854 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76854 -w 256 00:24:44.193 23:40:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76854 root 20 0 20.1t 120728 13304 R 99.9 1.0 0:01.15 reactor_0' 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76854 root 20 0 20.1t 120728 13304 R 99.9 1.0 0:01.15 reactor_0 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:44.452 
23:40:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 76854 2 00:24:44.452 23:40:07 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 76854 2 busy 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76854 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76854 -w 256 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76867 root 20 0 20.1t 120728 13304 R 93.8 1.0 0:00.34 reactor_2' 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76867 root 20 0 20.1t 120728 13304 R 93.8 1.0 0:00.34 reactor_2 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.8 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:44.453 23:40:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:44.711 [2024-05-14 23:40:07.893167] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
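The probes above show how interrupt/common.sh classifies a reactor thread: it takes one batch iteration of top for the target pid (top -bHn 1 -p <pid> -w 256), greps the reactor_N thread line, strips leading whitespace with sed, reads the %CPU column with awk '{print $9}', truncates it to an integer, and then applies the thresholds visible in the trace: a reactor claimed busy must not sit below 70% CPU, and one claimed idle must not sit above 30%. A condensed bash sketch of that check (names and the retry loop are simplified; this is not the verbatim common.sh):

    # Condensed sketch of the busy/idle probe seen in the trace above.
    # pid: target pid, idx: reactor index, state: "busy" or "idle".
    reactor_state_ok() {
        local pid=$1 idx=$2 state=$3
        local line cpu
        # One batch iteration of top, thread view, limited to the target pid.
        line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx") || return 1
        # Column 9 is %CPU; drop the fractional part for integer comparison.
        cpu=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu=${cpu%.*}
        if [[ $state == busy ]]; then
            (( cpu >= 70 ))   # the trace rejects a "busy" reactor below 70% CPU
        else
            (( cpu <= 30 ))   # and an "idle" reactor above 30% CPU
        fi
    }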
00:24:44.711 [2024-05-14 23:40:07.893783] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 76854 2 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 76854 2 idle 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76854 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:44.711 23:40:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76854 -w 256 00:24:44.971 23:40:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76867 root 20 0 20.1t 120808 13308 S 0.0 1.0 0:00.55 reactor_2' 00:24:44.971 23:40:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76867 root 20 0 20.1t 120808 13308 S 0.0 1.0 0:00.55 reactor_2 00:24:44.971 23:40:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:44.971 23:40:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:44.971 23:40:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:44.971 23:40:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:44.971 23:40:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:44.971 23:40:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:44.971 23:40:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:44.971 23:40:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:44.971 23:40:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:45.231 [2024-05-14 23:40:08.293196] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:45.231 [2024-05-14 23:40:08.293496] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
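The mode switches in this test all go through the interrupt_plugin RPC plugin: reactor_set_interrupt_mode <id> -d drops a reactor from interrupt mode back to polling, and the same call without -d restores interrupt mode, with the target printing the NOTICE pair seen above for each switch. A sketch of issuing the same sequence by hand against an already running interrupt_tgt (paths are taken from the trace, and PYTHONPATH is assumed to already include examples/interrupt_tgt so the plugin resolves):

    # Sketch of the poll/interrupt toggling traced above, assuming an
    # interrupt_tgt is already listening on /var/tmp/spdk.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    for reactor in 0 2; do
        # -d: disable interrupt mode, i.e. drop the reactor back to polling.
        "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode "$reactor" -d
    done

    # ... run the busy checks here while the reactors poll ...

    for reactor in 2 0; do
        # Without -d the reactor is switched back to interrupt mode.
        "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode "$reactor"
    done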
00:24:45.231 [2024-05-14 23:40:08.293536] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 76854 0 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 76854 0 idle 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=76854 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 76854 -w 256 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 76854 root 20 0 20.1t 120872 13308 S 0.0 1.0 0:01.92 reactor_0' 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 76854 root 20 0 20.1t 120872 13308 S 0.0 1.0 0:01.92 reactor_0 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:45.231 23:40:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 76854 00:24:45.231 23:40:08 reactor_set_interrupt -- common/autotest_common.sh@946 -- # '[' -z 76854 ']' 00:24:45.231 23:40:08 reactor_set_interrupt -- common/autotest_common.sh@950 -- # kill -0 76854 00:24:45.231 23:40:08 reactor_set_interrupt -- common/autotest_common.sh@951 -- # uname 00:24:45.231 23:40:08 reactor_set_interrupt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:45.231 23:40:08 reactor_set_interrupt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76854 00:24:45.231 killing process with pid 76854 00:24:45.231 23:40:08 reactor_set_interrupt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:45.231 23:40:08 reactor_set_interrupt -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:45.231 23:40:08 reactor_set_interrupt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76854' 00:24:45.231 23:40:08 reactor_set_interrupt -- common/autotest_common.sh@965 -- # kill 76854 00:24:45.231 23:40:08 reactor_set_interrupt -- common/autotest_common.sh@970 -- # wait 76854 00:24:46.607 23:40:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:24:46.607 23:40:09 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:46.607 ************************************ 00:24:46.607 END TEST reactor_set_interrupt 00:24:46.607 ************************************ 00:24:46.607 00:24:46.607 real 0m11.285s 00:24:46.607 user 0m11.784s 00:24:46.607 sys 0m1.377s 00:24:46.607 23:40:09 reactor_set_interrupt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:46.607 23:40:09 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:46.607 23:40:09 -- spdk/autotest.sh@190 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:46.607 23:40:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:46.607 23:40:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:46.607 23:40:09 -- common/autotest_common.sh@10 -- # set +x 00:24:46.607 ************************************ 00:24:46.607 START TEST reap_unregistered_poller 00:24:46.607 ************************************ 00:24:46.607 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:46.868 * Looking for test storage... 00:24:46.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:46.868 23:40:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:24:46.868 23:40:09 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:46.868 23:40:09 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:46.868 23:40:09 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:46.868 23:40:09 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
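Teardown above goes through autotest_common.sh's killprocess: it refuses an empty pid, confirms the process is still alive with kill -0, resolves the command name via ps --no-headers -o comm= (escalating through sudo only when the process was started that way), sends the kill, and waits on the pid so the EXIT trap's cleanup can safely remove the aiofile. A reduced sketch of that shape, not the verbatim helper:

    # Reduced sketch of the teardown pattern above: terminate the target and
    # reap it so the EXIT trap's cleanup (removing the aiofile) runs safely.
    kill_target() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" in the trace
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true           # reap; works when $pid is a child of this shell
    }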
00:24:46.868 23:40:09 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:46.868 23:40:09 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:24:46.868 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:46.868 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:24:46.868 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:46.868 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:46.868 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:24:46.868 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:46.868 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_RDMA=y 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_UNIT_TESTS=y 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_GOLANG=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_FUSE=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_ISAL=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_VTUNE_DIR= 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_CUSTOMOCF=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_IPSEC_MB_DIR= 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_VBDEV_COMPRESS=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_OCF_PATH= 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_SHARED=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_PGO_DIR= 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TESTS=y 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_APPS=y 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_ISAL_CRYPTO=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_LIBDIR= 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_DAOS_DIR= 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_ISCSI_INITIATOR=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_ASAN=y 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_LTO=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_CET=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@25 -- # 
CONFIG_FUZZER=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_USDT=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_VTUNE=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_VHOST=y 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_WPDK_DIR= 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_UBLK=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_URING=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_SMA=n 00:24:46.868 23:40:09 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_IDXD_KERNEL=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FC_PATH= 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_PREFIX=/usr/local 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_XNVME=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_RDMA_PROV=verbs 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_RDMA_SET_TOS=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_FUZZER_LIB= 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_ARCH=native 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_PGO_CAPTURE=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_DAOS=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_WERROR=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_DEBUG=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_AVAHI=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_CROSS_PREFIX= 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_HAVE_KEYUTILS=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_PGO_USE=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_CRYPTO=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_HAVE_ARC4RANDOM=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_OPENSSL_PATH= 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_EXAMPLES=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_DPDK_INC_DIR= 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_EVP_MAC=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_MAX_LCORES= 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_VIRTIO=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 
00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_IPSEC_MB=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_UBSAN=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_HAVE_LIBBSD=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_URING_PATH= 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_NVME_CUSE=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_URING_ZNS=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_VFIO_USER=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RBD=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_RAID5F=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_VFIO_USER_DIR= 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_TSAN=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IDXD=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_DPDK_UADK=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_OCF=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_FIO_PLUGIN=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_COVERAGE=y 00:24:46.869 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:24:46.869 23:40:09 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:46.869 #define SPDK_CONFIG_H 00:24:46.869 #define SPDK_CONFIG_APPS 1 00:24:46.869 #define SPDK_CONFIG_ARCH native 00:24:46.869 #define SPDK_CONFIG_ASAN 1 00:24:46.869 #undef SPDK_CONFIG_AVAHI 00:24:46.869 #undef SPDK_CONFIG_CET 00:24:46.869 #define SPDK_CONFIG_COVERAGE 1 00:24:46.869 #define SPDK_CONFIG_CROSS_PREFIX 00:24:46.869 #undef SPDK_CONFIG_CRYPTO 00:24:46.869 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:46.869 #undef SPDK_CONFIG_CUSTOMOCF 00:24:46.869 #define SPDK_CONFIG_DAOS 1 00:24:46.869 #define SPDK_CONFIG_DAOS_DIR 00:24:46.869 #define SPDK_CONFIG_DEBUG 1 00:24:46.869 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:46.869 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:24:46.869 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:46.869 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:46.869 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:46.869 #undef SPDK_CONFIG_DPDK_UADK 00:24:46.869 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:46.869 #define SPDK_CONFIG_EXAMPLES 1 00:24:46.869 #undef SPDK_CONFIG_FC 00:24:46.869 #define SPDK_CONFIG_FC_PATH 00:24:46.869 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:46.869 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:46.869 #undef SPDK_CONFIG_FUSE 00:24:46.869 #undef SPDK_CONFIG_FUZZER 00:24:46.869 #define SPDK_CONFIG_FUZZER_LIB 00:24:46.869 #undef SPDK_CONFIG_GOLANG 00:24:46.869 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:24:46.869 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:24:46.869 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:46.869 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:24:46.869 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:46.869 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:46.869 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:24:46.869 #define SPDK_CONFIG_IDXD 1 00:24:46.869 #undef SPDK_CONFIG_IDXD_KERNEL 00:24:46.869 #undef SPDK_CONFIG_IPSEC_MB 00:24:46.869 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:46.869 #undef SPDK_CONFIG_ISAL 00:24:46.869 #undef SPDK_CONFIG_ISAL_CRYPTO 00:24:46.869 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:24:46.869 #define SPDK_CONFIG_LIBDIR 00:24:46.869 #undef SPDK_CONFIG_LTO 00:24:46.869 #define SPDK_CONFIG_MAX_LCORES 00:24:46.869 #define SPDK_CONFIG_NVME_CUSE 1 00:24:46.869 #undef SPDK_CONFIG_OCF 00:24:46.869 #define SPDK_CONFIG_OCF_PATH 00:24:46.869 #define SPDK_CONFIG_OPENSSL_PATH 00:24:46.869 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:46.869 #define SPDK_CONFIG_PGO_DIR 00:24:46.869 #undef SPDK_CONFIG_PGO_USE 00:24:46.869 #define SPDK_CONFIG_PREFIX /usr/local 00:24:46.869 #undef SPDK_CONFIG_RAID5F 00:24:46.869 #undef SPDK_CONFIG_RBD 00:24:46.869 #define SPDK_CONFIG_RDMA 1 00:24:46.869 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:46.869 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 
00:24:46.869 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:24:46.869 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:46.869 #undef SPDK_CONFIG_SHARED 00:24:46.869 #undef SPDK_CONFIG_SMA 00:24:46.870 #define SPDK_CONFIG_TESTS 1 00:24:46.870 #undef SPDK_CONFIG_TSAN 00:24:46.870 #undef SPDK_CONFIG_UBLK 00:24:46.870 #undef SPDK_CONFIG_UBSAN 00:24:46.870 #define SPDK_CONFIG_UNIT_TESTS 1 00:24:46.870 #undef SPDK_CONFIG_URING 00:24:46.870 #define SPDK_CONFIG_URING_PATH 00:24:46.870 #undef SPDK_CONFIG_URING_ZNS 00:24:46.870 #undef SPDK_CONFIG_USDT 00:24:46.870 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:46.870 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:46.870 #undef SPDK_CONFIG_VFIO_USER 00:24:46.870 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:46.870 #define SPDK_CONFIG_VHOST 1 00:24:46.870 #define SPDK_CONFIG_VIRTIO 1 00:24:46.870 #undef SPDK_CONFIG_VTUNE 00:24:46.870 #define SPDK_CONFIG_VTUNE_DIR 00:24:46.870 #define SPDK_CONFIG_WERROR 1 00:24:46.870 #define SPDK_CONFIG_WPDK_DIR 00:24:46.870 #undef SPDK_CONFIG_XNVME 00:24:46.870 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:46.870 23:40:09 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:46.870 23:40:09 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.870 23:40:09 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.870 23:40:09 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.870 23:40:09 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:46.870 23:40:09 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:46.870 23:40:09 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:46.870 23:40:09 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:24:46.870 23:40:09 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:24:46.870 23:40:09 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@57 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@61 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@63 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@65 -- # : 1 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@67 -- # : 1 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@69 -- # : 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@71 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@73 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@75 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@77 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@79 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@81 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@83 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@85 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@87 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@89 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@91 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@93 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- 
common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@95 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@97 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@99 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@101 -- # : rdma 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@103 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@105 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@107 -- # : 1 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@109 -- # : 0 00:24:46.870 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@111 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@113 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@115 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@117 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@119 -- # : 1 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@121 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@123 -- # : 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@125 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@127 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@129 -- # : 0 00:24:46.871 
23:40:09 reap_unregistered_poller -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@131 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@133 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@135 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@137 -- # : 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@139 -- # : true 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@141 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@143 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@145 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@147 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@149 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@151 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@153 -- # : 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@155 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@157 -- # : 1 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@159 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@161 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@163 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 
00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@168 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@170 -- # : 0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@188 -- # 
PYTHONDONTWRITEBYTECODE=1 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@199 -- # cat 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export QEMU_BIN= 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@252 -- # QEMU_BIN= 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:46.871 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:46.872 23:40:09 reap_unregistered_poller -- 
common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@262 -- # export valgrind= 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@262 -- # valgrind= 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@268 -- # uname -s 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@278 -- # MAKE=make 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@298 -- # TEST_MODE= 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@317 -- # [[ -z 77041 ]] 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@317 -- # kill -0 77041 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local mount target_dir 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.y2ZO2p 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:24:46.872 23:40:09 reap_unregistered_poller -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.y2ZO2p/tests/interrupt /tmp/spdk.y2ZO2p 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@357 -- # 
requested_size=2214592512 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@326 -- # df -T 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=6267637760 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267637760 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=6293479424 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=6277242880 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=20946944 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=6298189824 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=6298189824 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda1 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=xfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=14334148608 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=21463302144 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@362 -- 
# uses["$mount"]=7129153536 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=1259638784 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=1259638784 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=92384907264 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=7317872640 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:24:46.872 * Looking for test storage... 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@367 -- # local target_space new_size 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:46.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@371 -- # mount=/ 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@373 -- # target_space=14334148608 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ xfs == tmpfs ]] 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ xfs == ramfs ]] 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@380 -- # new_size=9343746048 00:24:46.872 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:46.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@388 -- # return 0 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@1678 -- # set -o errtrace 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # true 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@1685 -- # xtrace_fd 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=77085 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 77085 /var/tmp/spdk.sock 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@827 -- # '[' -z 77085 ']' 00:24:46.873 23:40:10 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:46.873 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:47.132 [2024-05-14 23:40:10.159851] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
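The start_intr_tgt sequence traced above boils down to launching build/examples/interrupt_tgt pinned to core mask 0x07, registering a cleanup trap, and waiting (max_retries=100 in the trace) until the RPC server answers on /var/tmp/spdk.sock. A minimal standalone sketch of that launch-and-wait pattern, assuming a plain socket-existence poll with a 0.1 s interval in place of the real waitforlisten helper:

#!/bin/bash
# Minimal sketch of the launch-and-wait pattern above.  The socket-existence poll and
# the 0.1 s sleep are illustrative; the flags, paths and retry budget come from the trace.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk.sock
cpu_mask=0x07
max_retries=100

"$SPDK_DIR/build/examples/interrupt_tgt" -m "$cpu_mask" -r "$rpc_sock" -E -g &
tgt_pid=$!
trap 'kill "$tgt_pid" 2>/dev/null' EXIT

for ((i = 0; i < max_retries; i++)); do
    if [[ -S $rpc_sock ]]; then
        echo "interrupt_tgt (pid $tgt_pid) is listening on $rpc_sock"
        break
    fi
    # Bail out early if the target died before it ever started listening.
    kill -0 "$tgt_pid" 2>/dev/null || { echo "interrupt_tgt exited prematurely" >&2; exit 1; }
    sleep 0.1
done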
00:24:47.132 [2024-05-14 23:40:10.160038] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77085 ] 00:24:47.132 [2024-05-14 23:40:10.320287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:47.390 [2024-05-14 23:40:10.533129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.390 [2024-05-14 23:40:10.533287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.390 [2024-05-14 23:40:10.533295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.648 [2024-05-14 23:40:10.863058] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:47.906 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:47.906 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@860 -- # return 0 00:24:47.906 23:40:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:24:47.906 23:40:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:24:47.906 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.906 23:40:10 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:47.906 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.906 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:24:47.906 "name": "app_thread", 00:24:47.906 "id": 1, 00:24:47.906 "active_pollers": [], 00:24:47.906 "timed_pollers": [ 00:24:47.906 { 00:24:47.906 "name": "rpc_subsystem_poll_servers", 00:24:47.906 "id": 1, 00:24:47.906 "state": "waiting", 00:24:47.906 "run_count": 0, 00:24:47.906 "busy_count": 0, 00:24:47.906 "period_ticks": 8800000 00:24:47.906 } 00:24:47.906 ], 00:24:47.906 "paused_pollers": [] 00:24:47.906 }' 00:24:47.906 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:24:47.906 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:24:47.906 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:24:47.906 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:24:47.906 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:24:47.906 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:24:47.906 23:40:11 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:24:47.907 23:40:11 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:47.907 23:40:11 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:48.165 5000+0 records in 00:24:48.165 5000+0 records out 00:24:48.165 10240000 bytes (10 MB) copied, 0.0209279 s, 489 MB/s 00:24:48.165 23:40:11 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 
2048 00:24:48.165 AIO0 00:24:48.165 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:48.423 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:24:48.682 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:24:48.682 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.682 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:24:48.682 "name": "app_thread", 00:24:48.682 "id": 1, 00:24:48.682 "active_pollers": [], 00:24:48.682 "timed_pollers": [ 00:24:48.682 { 00:24:48.682 "name": "rpc_subsystem_poll_servers", 00:24:48.682 "id": 1, 00:24:48.682 "state": "waiting", 00:24:48.682 "run_count": 0, 00:24:48.682 "busy_count": 0, 00:24:48.682 "period_ticks": 8800000 00:24:48.682 } 00:24:48.682 ], 00:24:48.682 "paused_pollers": [] 00:24:48.682 }' 00:24:48.682 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:24:48.682 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:24:48.682 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:24:48.682 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:24:48.682 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:24:48.682 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:24:48.682 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:48.682 23:40:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 77085 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@946 -- # '[' -z 77085 ']' 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@950 -- # kill -0 77085 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@951 -- # uname 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77085 00:24:48.682 killing process with pid 77085 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77085' 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@965 -- # kill 77085 00:24:48.682 23:40:11 reap_unregistered_poller -- common/autotest_common.sh@970 -- # wait 77085 00:24:50.059 23:40:13 reap_unregistered_poller -- 
interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:24:50.059 23:40:13 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:50.059 ************************************ 00:24:50.059 END TEST reap_unregistered_poller 00:24:50.059 ************************************ 00:24:50.059 00:24:50.059 real 0m3.215s 00:24:50.059 user 0m2.782s 00:24:50.059 sys 0m0.496s 00:24:50.059 23:40:13 reap_unregistered_poller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:50.059 23:40:13 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:50.059 23:40:13 -- spdk/autotest.sh@194 -- # uname -s 00:24:50.059 23:40:13 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:24:50.059 23:40:13 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:24:50.059 23:40:13 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:24:50.059 23:40:13 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:24:50.059 23:40:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:50.059 23:40:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:50.059 23:40:13 -- common/autotest_common.sh@10 -- # set +x 00:24:50.059 ************************************ 00:24:50.059 START TEST spdk_dd 00:24:50.059 ************************************ 00:24:50.059 23:40:13 spdk_dd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:24:50.059 * Looking for test storage... 00:24:50.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:50.059 23:40:13 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:50.059 23:40:13 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.059 23:40:13 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.059 23:40:13 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.059 23:40:13 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:50.059 23:40:13 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:50.059 23:40:13 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:50.059 23:40:13 spdk_dd -- paths/export.sh@5 -- # export PATH 00:24:50.059 23:40:13 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:50.059 23:40:13 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:50.059 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:24:50.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:24:50.059 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:50.320 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:24:50.320 23:40:13 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:24:50.320 23:40:13 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@230 -- # local class 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@232 -- # local progif 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@233 -- # class=01 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:24:50.320 23:40:13 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@15 -- # local i 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@24 -- # return 0 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:50.321 23:40:13 spdk_dd -- 
scripts/common.sh@320 -- # uname -s 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:24:50.321 23:40:13 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:24:50.321 23:40:13 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@139 -- # local lib so 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libdaos.so.2 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libdaos_common.so == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libdfs.so == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libgurt.so.4 == 
liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:24:50.321 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libz.so.1 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libisal.so.2 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libcart.so.4 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ liblz4.so.1 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libprotobuf-c.so.1 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libyaml-0.so.2 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libmercury_hl.so.2 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libmercury.so.2 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libmercury_util.so.2 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libna.so.2 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libfabric.so.1 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@143 -- # [[ libpsm2.so.2 == liburing.so.* ]] 00:24:50.322 23:40:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:50.322 23:40:13 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 
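The check_liburing walk above feeds the loader trace of spdk_dd (LD_TRACE_LOADED_OBJECTS=1) through a read loop and flips liburing_in_use when any liburing.so.* entry appears; the uring variant of the dd tests only runs when that flag and SPDK_TEST_URING are both set. A compact sketch of the same detection, with the binary path taken from the trace and the final echo added purely for illustration:

#!/bin/bash
# Sketch of the liburing detection above: dump the dynamic dependencies of spdk_dd via the
# loader and flag liburing.so if it appears.  The final echo is only for illustration.
binary=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
liburing_in_use=0

while read -r lib _ so _; do
    # Each loader line looks like "libfoo.so.1 => /path/libfoo.so.1 (0x...)".
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
done < <(LD_TRACE_LOADED_OBJECTS=1 "$binary")

echo "liburing_in_use=$liburing_in_use"

None of the libraries listed in the trace matches, so liburing_in_use stays 0 in this run and the plain aio/pcie path is exercised.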
00:24:50.322 23:40:13 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:24:50.322 23:40:13 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:50.322 23:40:13 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:50.322 23:40:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:24:50.322 ************************************ 00:24:50.322 START TEST spdk_dd_basic_rw 00:24:50.322 ************************************ 00:24:50.322 23:40:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:24:50.322 * Looking for test storage... 00:24:50.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:50.322 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:50.322 23:40:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.322 23:40:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.322 23:40:13 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.322 23:40:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:50.322 23:40:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:50.322 23:40:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:50.322 23:40:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:24:50.323 23:40:13 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:24:50.323 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:24:50.587 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported 
Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy 
(19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 85 Data Units Written: 204 Host Read Commands: 1687 Host Write Commands: 308 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:24:50.587 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not 
Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized 
SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 85 Data Units Written: 204 Host Read Commands: 1687 Host Write Commands: 308 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA 
Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:24:50.588 ************************************ 00:24:50.588 START TEST dd_bs_lt_native_bs 00:24:50.588 ************************************ 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1121 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:50.588 23:40:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:50.847 { 00:24:50.847 "subsystems": [ 00:24:50.847 { 00:24:50.847 "subsystem": 
"bdev", 00:24:50.848 "config": [ 00:24:50.848 { 00:24:50.848 "params": { 00:24:50.848 "trtype": "pcie", 00:24:50.848 "name": "Nvme0", 00:24:50.848 "traddr": "0000:00:10.0" 00:24:50.848 }, 00:24:50.848 "method": "bdev_nvme_attach_controller" 00:24:50.848 }, 00:24:50.848 { 00:24:50.848 "method": "bdev_wait_for_examine" 00:24:50.848 } 00:24:50.848 ] 00:24:50.848 } 00:24:50.848 ] 00:24:50.848 } 00:24:50.848 [2024-05-14 23:40:14.011244] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:50.848 [2024-05-14 23:40:14.011496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77371 ] 00:24:51.105 [2024-05-14 23:40:14.173281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.363 [2024-05-14 23:40:14.407802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.621 [2024-05-14 23:40:14.875011] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:24:51.621 [2024-05-14 23:40:14.875098] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:52.557 [2024-05-14 23:40:15.691597] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:24:52.815 ************************************ 00:24:52.815 END TEST dd_bs_lt_native_bs 00:24:52.815 ************************************ 00:24:52.815 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:24:52.815 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:52.815 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:24:52.815 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:24:52.815 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:24:52.815 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:52.815 00:24:52.815 real 0m2.205s 00:24:52.815 user 0m1.820s 00:24:52.815 sys 0m0.249s 00:24:52.815 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:52.815 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:24:53.756 ************************************ 00:24:53.756 START TEST dd_rw 00:24:53.756 ************************************ 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1121 -- # basic_rw 4096 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 
00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:24:53.756 23:40:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:24:54.694 23:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:24:54.694 23:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:24:54.694 23:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:24:54.694 23:40:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:24:54.694 { 00:24:54.694 "subsystems": [ 00:24:54.694 { 00:24:54.694 "subsystem": "bdev", 00:24:54.694 "config": [ 00:24:54.694 { 00:24:54.694 "params": { 00:24:54.694 "trtype": "pcie", 00:24:54.694 "name": "Nvme0", 00:24:54.694 "traddr": "0000:00:10.0" 00:24:54.694 }, 00:24:54.694 "method": "bdev_nvme_attach_controller" 00:24:54.694 }, 00:24:54.694 { 00:24:54.694 "method": "bdev_wait_for_examine" 00:24:54.694 } 00:24:54.694 ] 00:24:54.694 } 00:24:54.694 ] 00:24:54.694 } 00:24:54.694 [2024-05-14 23:40:17.883684] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
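The dd_rw setup traced above builds its matrix by left-shifting the detected native block size (4096, parsed from the LBA format earlier) and pairing each resulting bs with queue depths 1 and 64; the invocation just shown is the first combination, bs=4096 at qd=1. A small sketch of that matrix construction, assuming size follows the count*bs relationship visible for the 4096 case, with the echo standing in for the actual spdk_dd run:

#!/bin/bash
# Sketch of the bs/qd matrix built in basic_rw.sh: the native block size shifted by 0..2
# gives 4096, 8192 and 16384, and each is exercised at queue depths 1 and 64.
# The echo stands in for the spdk_dd invocation; size=count*bs matches the 61440 seen above.
native_bs=4096      # parsed from "Current LBA Format: LBA Format #04" in the identify output
count=15
qds=(1 64)

bss=()
for bs in {0..2}; do
    bss+=($((native_bs << bs)))
done

for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        size=$((count * bs))
        echo "would run: bs=$bs qd=$qd count=$count size=$size"
    done
done

With native_bs=4096 this yields the 61440-byte transfers seen in the qd=1 run above and the qd=64 run that follows.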
00:24:54.694 [2024-05-14 23:40:17.883904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77456 ] 00:24:54.953 [2024-05-14 23:40:18.051766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.212 [2024-05-14 23:40:18.280454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.848  Copying: 60/60 [kB] (average 29 MBps) 00:24:56.848 00:24:56.848 23:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:24:56.848 23:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:24:56.848 23:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:24:56.848 23:40:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:24:56.848 { 00:24:56.848 "subsystems": [ 00:24:56.848 { 00:24:56.848 "subsystem": "bdev", 00:24:56.848 "config": [ 00:24:56.848 { 00:24:56.848 "params": { 00:24:56.848 "trtype": "pcie", 00:24:56.848 "name": "Nvme0", 00:24:56.848 "traddr": "0000:00:10.0" 00:24:56.848 }, 00:24:56.848 "method": "bdev_nvme_attach_controller" 00:24:56.848 }, 00:24:56.848 { 00:24:56.848 "method": "bdev_wait_for_examine" 00:24:56.848 } 00:24:56.848 ] 00:24:56.848 } 00:24:56.848 ] 00:24:56.848 } 00:24:57.106 [2024-05-14 23:40:20.215647] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:57.106 [2024-05-14 23:40:20.215864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77497 ] 00:24:57.106 [2024-05-14 23:40:20.380411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.674 [2024-05-14 23:40:20.659954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.369  Copying: 60/60 [kB] (average 19 MBps) 00:24:59.369 00:24:59.369 23:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:59.369 23:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:24:59.369 23:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:59.369 23:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:24:59.369 23:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:24:59.369 23:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:24:59.369 23:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:24:59.369 23:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:59.369 23:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:24:59.369 23:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:24:59.369 23:40:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:24:59.369 { 00:24:59.369 "subsystems": [ 00:24:59.369 { 00:24:59.369 "subsystem": "bdev", 
00:24:59.369 "config": [ 00:24:59.369 { 00:24:59.369 "params": { 00:24:59.369 "trtype": "pcie", 00:24:59.369 "name": "Nvme0", 00:24:59.369 "traddr": "0000:00:10.0" 00:24:59.369 }, 00:24:59.369 "method": "bdev_nvme_attach_controller" 00:24:59.369 }, 00:24:59.369 { 00:24:59.369 "method": "bdev_wait_for_examine" 00:24:59.369 } 00:24:59.369 ] 00:24:59.369 } 00:24:59.369 ] 00:24:59.369 } 00:24:59.369 [2024-05-14 23:40:22.580587] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:24:59.369 [2024-05-14 23:40:22.580759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77533 ] 00:24:59.628 [2024-05-14 23:40:22.735833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.887 [2024-05-14 23:40:22.976836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.393  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:01.393 00:25:01.393 23:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:01.393 23:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:25:01.393 23:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:25:01.393 23:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:25:01.393 23:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:01.393 23:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:25:01.393 23:40:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:02.330 23:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:25:02.330 23:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:25:02.330 23:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:02.330 23:40:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:02.330 { 00:25:02.330 "subsystems": [ 00:25:02.330 { 00:25:02.330 "subsystem": "bdev", 00:25:02.330 "config": [ 00:25:02.330 { 00:25:02.330 "params": { 00:25:02.330 "trtype": "pcie", 00:25:02.330 "name": "Nvme0", 00:25:02.330 "traddr": "0000:00:10.0" 00:25:02.330 }, 00:25:02.330 "method": "bdev_nvme_attach_controller" 00:25:02.330 }, 00:25:02.330 { 00:25:02.330 "method": "bdev_wait_for_examine" 00:25:02.330 } 00:25:02.330 ] 00:25:02.330 } 00:25:02.330 ] 00:25:02.330 } 00:25:02.589 [2024-05-14 23:40:25.621079] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:25:02.589 [2024-05-14 23:40:25.621415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77573 ] 00:25:02.589 [2024-05-14 23:40:25.771035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.848 [2024-05-14 23:40:25.988009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.351  Copying: 60/60 [kB] (average 58 MBps) 00:25:04.351 00:25:04.351 23:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:25:04.351 23:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:25:04.351 23:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:04.351 23:40:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:04.609 { 00:25:04.609 "subsystems": [ 00:25:04.609 { 00:25:04.609 "subsystem": "bdev", 00:25:04.609 "config": [ 00:25:04.609 { 00:25:04.609 "params": { 00:25:04.609 "trtype": "pcie", 00:25:04.609 "name": "Nvme0", 00:25:04.609 "traddr": "0000:00:10.0" 00:25:04.609 }, 00:25:04.609 "method": "bdev_nvme_attach_controller" 00:25:04.609 }, 00:25:04.609 { 00:25:04.609 "method": "bdev_wait_for_examine" 00:25:04.609 } 00:25:04.609 ] 00:25:04.609 } 00:25:04.609 ] 00:25:04.609 } 00:25:04.609 [2024-05-14 23:40:27.739456] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:04.609 [2024-05-14 23:40:27.739627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77604 ] 00:25:04.609 [2024-05-14 23:40:27.891351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.868 [2024-05-14 23:40:28.096484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.819  Copying: 60/60 [kB] (average 58 MBps) 00:25:06.819 00:25:06.819 23:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:06.819 23:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:06.819 23:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:06.819 23:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:25:06.819 23:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:25:06.819 23:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:25:06.819 23:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:25:06.819 23:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:06.819 23:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:25:06.819 23:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:06.819 23:40:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:06.819 { 00:25:06.819 "subsystems": [ 00:25:06.819 { 00:25:06.819 "subsystem": "bdev", 
00:25:06.819 "config": [ 00:25:06.819 { 00:25:06.819 "params": { 00:25:06.819 "trtype": "pcie", 00:25:06.819 "name": "Nvme0", 00:25:06.819 "traddr": "0000:00:10.0" 00:25:06.819 }, 00:25:06.819 "method": "bdev_nvme_attach_controller" 00:25:06.819 }, 00:25:06.819 { 00:25:06.819 "method": "bdev_wait_for_examine" 00:25:06.819 } 00:25:06.819 ] 00:25:06.819 } 00:25:06.819 ] 00:25:06.819 } 00:25:06.819 [2024-05-14 23:40:29.872583] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:06.819 [2024-05-14 23:40:29.872779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77633 ] 00:25:06.819 [2024-05-14 23:40:30.022099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.080 [2024-05-14 23:40:30.236748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.622  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:08.622 00:25:08.622 23:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:08.622 23:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:08.622 23:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:25:08.622 23:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:25:08.622 23:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:25:08.622 23:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:08.622 23:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:25:08.622 23:40:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:09.554 23:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:25:09.554 23:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:25:09.554 23:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:09.554 23:40:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:09.554 { 00:25:09.554 "subsystems": [ 00:25:09.554 { 00:25:09.554 "subsystem": "bdev", 00:25:09.554 "config": [ 00:25:09.554 { 00:25:09.554 "params": { 00:25:09.554 "trtype": "pcie", 00:25:09.554 "name": "Nvme0", 00:25:09.554 "traddr": "0000:00:10.0" 00:25:09.554 }, 00:25:09.554 "method": "bdev_nvme_attach_controller" 00:25:09.554 }, 00:25:09.554 { 00:25:09.554 "method": "bdev_wait_for_examine" 00:25:09.554 } 00:25:09.554 ] 00:25:09.554 } 00:25:09.554 ] 00:25:09.554 } 00:25:09.554 [2024-05-14 23:40:32.712533] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:25:09.554 [2024-05-14 23:40:32.712697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77675 ] 00:25:09.811 [2024-05-14 23:40:32.873445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.811 [2024-05-14 23:40:33.075761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.740  Copying: 56/56 [kB] (average 54 MBps) 00:25:11.740 00:25:11.740 23:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:25:11.740 23:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:25:11.740 23:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:11.740 23:40:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:11.740 { 00:25:11.740 "subsystems": [ 00:25:11.740 { 00:25:11.740 "subsystem": "bdev", 00:25:11.740 "config": [ 00:25:11.740 { 00:25:11.740 "params": { 00:25:11.740 "trtype": "pcie", 00:25:11.740 "name": "Nvme0", 00:25:11.740 "traddr": "0000:00:10.0" 00:25:11.740 }, 00:25:11.740 "method": "bdev_nvme_attach_controller" 00:25:11.740 }, 00:25:11.740 { 00:25:11.740 "method": "bdev_wait_for_examine" 00:25:11.740 } 00:25:11.740 ] 00:25:11.740 } 00:25:11.740 ] 00:25:11.740 } 00:25:11.740 [2024-05-14 23:40:34.880760] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:11.740 [2024-05-14 23:40:34.880950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77707 ] 00:25:11.998 [2024-05-14 23:40:35.030003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.998 [2024-05-14 23:40:35.236933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.971  Copying: 56/56 [kB] (average 54 MBps) 00:25:13.971 00:25:13.971 23:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:13.971 23:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:13.971 23:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:13.971 23:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:25:13.971 23:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:25:13.971 23:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:25:13.971 23:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:25:13.971 23:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:13.971 23:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:25:13.971 23:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:13.971 23:40:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:13.971 { 00:25:13.971 "subsystems": [ 00:25:13.971 { 00:25:13.971 "subsystem": "bdev", 
00:25:13.971 "config": [ 00:25:13.971 { 00:25:13.971 "params": { 00:25:13.971 "trtype": "pcie", 00:25:13.971 "name": "Nvme0", 00:25:13.971 "traddr": "0000:00:10.0" 00:25:13.971 }, 00:25:13.971 "method": "bdev_nvme_attach_controller" 00:25:13.971 }, 00:25:13.971 { 00:25:13.971 "method": "bdev_wait_for_examine" 00:25:13.971 } 00:25:13.971 ] 00:25:13.971 } 00:25:13.971 ] 00:25:13.971 } 00:25:13.971 [2024-05-14 23:40:37.021857] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:13.971 [2024-05-14 23:40:37.022036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77740 ] 00:25:13.971 [2024-05-14 23:40:37.183763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.228 [2024-05-14 23:40:37.395667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.728  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:15.728 00:25:15.987 23:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:15.987 23:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:25:15.987 23:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:25:15.987 23:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:25:15.987 23:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:15.987 23:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:25:15.987 23:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:16.553 23:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:25:16.553 23:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:25:16.553 23:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:16.553 23:40:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:16.553 { 00:25:16.553 "subsystems": [ 00:25:16.553 { 00:25:16.553 "subsystem": "bdev", 00:25:16.553 "config": [ 00:25:16.553 { 00:25:16.553 "params": { 00:25:16.553 "trtype": "pcie", 00:25:16.553 "name": "Nvme0", 00:25:16.553 "traddr": "0000:00:10.0" 00:25:16.553 }, 00:25:16.553 "method": "bdev_nvme_attach_controller" 00:25:16.553 }, 00:25:16.553 { 00:25:16.553 "method": "bdev_wait_for_examine" 00:25:16.553 } 00:25:16.553 ] 00:25:16.553 } 00:25:16.553 ] 00:25:16.553 } 00:25:16.818 [2024-05-14 23:40:39.887737] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:25:16.818 [2024-05-14 23:40:39.887943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77779 ] 00:25:16.818 [2024-05-14 23:40:40.052932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.076 [2024-05-14 23:40:40.301808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.019  Copying: 56/56 [kB] (average 54 MBps) 00:25:19.019 00:25:19.019 23:40:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:25:19.019 23:40:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:25:19.019 23:40:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:19.019 23:40:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:19.019 { 00:25:19.019 "subsystems": [ 00:25:19.019 { 00:25:19.019 "subsystem": "bdev", 00:25:19.019 "config": [ 00:25:19.019 { 00:25:19.019 "params": { 00:25:19.019 "trtype": "pcie", 00:25:19.019 "name": "Nvme0", 00:25:19.019 "traddr": "0000:00:10.0" 00:25:19.019 }, 00:25:19.019 "method": "bdev_nvme_attach_controller" 00:25:19.019 }, 00:25:19.019 { 00:25:19.019 "method": "bdev_wait_for_examine" 00:25:19.019 } 00:25:19.019 ] 00:25:19.019 } 00:25:19.019 ] 00:25:19.019 } 00:25:19.019 [2024-05-14 23:40:42.085182] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:19.019 [2024-05-14 23:40:42.085367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77812 ] 00:25:19.019 [2024-05-14 23:40:42.241229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.278 [2024-05-14 23:40:42.455109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.816  Copying: 56/56 [kB] (average 54 MBps) 00:25:20.816 00:25:20.816 23:40:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:20.816 23:40:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:20.816 23:40:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:20.816 23:40:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:25:20.816 23:40:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:25:20.816 23:40:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:25:20.816 23:40:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:25:20.816 23:40:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:20.816 23:40:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:25:20.816 23:40:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:20.816 23:40:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:21.075 { 00:25:21.075 "subsystems": [ 00:25:21.075 { 00:25:21.075 "subsystem": "bdev", 
00:25:21.075 "config": [ 00:25:21.075 { 00:25:21.075 "params": { 00:25:21.075 "trtype": "pcie", 00:25:21.075 "name": "Nvme0", 00:25:21.075 "traddr": "0000:00:10.0" 00:25:21.075 }, 00:25:21.075 "method": "bdev_nvme_attach_controller" 00:25:21.075 }, 00:25:21.075 { 00:25:21.075 "method": "bdev_wait_for_examine" 00:25:21.075 } 00:25:21.075 ] 00:25:21.075 } 00:25:21.075 ] 00:25:21.075 } 00:25:21.075 [2024-05-14 23:40:44.235812] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:21.075 [2024-05-14 23:40:44.236022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77843 ] 00:25:21.333 [2024-05-14 23:40:44.398891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.593 [2024-05-14 23:40:44.675875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.225  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:23.225 00:25:23.225 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:23.225 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:23.225 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:25:23.225 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:25:23.225 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:25:23.225 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:23.225 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:25:23.225 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:23.792 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:25:23.792 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:25:23.792 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:23.792 23:40:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:23.792 { 00:25:23.792 "subsystems": [ 00:25:23.792 { 00:25:23.792 "subsystem": "bdev", 00:25:23.792 "config": [ 00:25:23.792 { 00:25:23.792 "params": { 00:25:23.792 "trtype": "pcie", 00:25:23.792 "name": "Nvme0", 00:25:23.792 "traddr": "0000:00:10.0" 00:25:23.792 }, 00:25:23.792 "method": "bdev_nvme_attach_controller" 00:25:23.792 }, 00:25:23.792 { 00:25:23.792 "method": "bdev_wait_for_examine" 00:25:23.792 } 00:25:23.792 ] 00:25:23.792 } 00:25:23.792 ] 00:25:23.792 } 00:25:23.792 [2024-05-14 23:40:47.036350] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:25:23.792 [2024-05-14 23:40:47.036635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77882 ] 00:25:24.050 [2024-05-14 23:40:47.214975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.308 [2024-05-14 23:40:47.455650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.808  Copying: 48/48 [kB] (average 46 MBps) 00:25:25.808 00:25:25.808 23:40:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:25:25.808 23:40:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:25:25.808 23:40:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:25.808 23:40:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:26.066 { 00:25:26.066 "subsystems": [ 00:25:26.066 { 00:25:26.066 "subsystem": "bdev", 00:25:26.066 "config": [ 00:25:26.066 { 00:25:26.066 "params": { 00:25:26.066 "trtype": "pcie", 00:25:26.066 "name": "Nvme0", 00:25:26.066 "traddr": "0000:00:10.0" 00:25:26.066 }, 00:25:26.066 "method": "bdev_nvme_attach_controller" 00:25:26.066 }, 00:25:26.066 { 00:25:26.066 "method": "bdev_wait_for_examine" 00:25:26.066 } 00:25:26.066 ] 00:25:26.066 } 00:25:26.066 ] 00:25:26.066 } 00:25:26.066 [2024-05-14 23:40:49.224214] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:26.066 [2024-05-14 23:40:49.224390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77914 ] 00:25:26.324 [2024-05-14 23:40:49.380846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.324 [2024-05-14 23:40:49.589486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.263  Copying: 48/48 [kB] (average 46 MBps) 00:25:28.263 00:25:28.263 23:40:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:28.263 23:40:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:28.263 23:40:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:28.263 23:40:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:25:28.263 23:40:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:25:28.263 23:40:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:25:28.263 23:40:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:25:28.263 23:40:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:28.263 23:40:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:25:28.263 23:40:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:28.263 23:40:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:28.263 { 00:25:28.263 "subsystems": [ 00:25:28.263 { 00:25:28.263 "subsystem": "bdev", 
00:25:28.263 "config": [ 00:25:28.263 { 00:25:28.263 "params": { 00:25:28.263 "trtype": "pcie", 00:25:28.263 "name": "Nvme0", 00:25:28.263 "traddr": "0000:00:10.0" 00:25:28.263 }, 00:25:28.263 "method": "bdev_nvme_attach_controller" 00:25:28.264 }, 00:25:28.264 { 00:25:28.264 "method": "bdev_wait_for_examine" 00:25:28.264 } 00:25:28.264 ] 00:25:28.264 } 00:25:28.264 ] 00:25:28.264 } 00:25:28.264 [2024-05-14 23:40:51.358541] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:28.264 [2024-05-14 23:40:51.358723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77947 ] 00:25:28.264 [2024-05-14 23:40:51.509999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.521 [2024-05-14 23:40:51.722711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.456  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:30.456 00:25:30.456 23:40:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:30.456 23:40:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:25:30.456 23:40:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:25:30.456 23:40:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:25:30.456 23:40:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:30.456 23:40:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:25:30.456 23:40:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:31.022 23:40:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:25:31.022 23:40:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:25:31.022 23:40:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:31.022 23:40:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:31.022 { 00:25:31.022 "subsystems": [ 00:25:31.022 { 00:25:31.022 "subsystem": "bdev", 00:25:31.022 "config": [ 00:25:31.022 { 00:25:31.022 "params": { 00:25:31.022 "trtype": "pcie", 00:25:31.022 "name": "Nvme0", 00:25:31.022 "traddr": "0000:00:10.0" 00:25:31.022 }, 00:25:31.022 "method": "bdev_nvme_attach_controller" 00:25:31.022 }, 00:25:31.022 { 00:25:31.022 "method": "bdev_wait_for_examine" 00:25:31.022 } 00:25:31.022 ] 00:25:31.022 } 00:25:31.022 ] 00:25:31.022 } 00:25:31.022 [2024-05-14 23:40:54.150847] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:25:31.022 [2024-05-14 23:40:54.151056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77988 ] 00:25:31.279 [2024-05-14 23:40:54.312808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.279 [2024-05-14 23:40:54.561605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.219  Copying: 48/48 [kB] (average 46 MBps) 00:25:33.219 00:25:33.219 23:40:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:25:33.219 23:40:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:25:33.219 23:40:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:33.219 23:40:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:33.219 { 00:25:33.219 "subsystems": [ 00:25:33.220 { 00:25:33.220 "subsystem": "bdev", 00:25:33.220 "config": [ 00:25:33.220 { 00:25:33.220 "params": { 00:25:33.220 "trtype": "pcie", 00:25:33.220 "name": "Nvme0", 00:25:33.220 "traddr": "0000:00:10.0" 00:25:33.220 }, 00:25:33.220 "method": "bdev_nvme_attach_controller" 00:25:33.220 }, 00:25:33.220 { 00:25:33.220 "method": "bdev_wait_for_examine" 00:25:33.220 } 00:25:33.220 ] 00:25:33.220 } 00:25:33.220 ] 00:25:33.220 } 00:25:33.220 [2024-05-14 23:40:56.350558] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:33.220 [2024-05-14 23:40:56.350751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78019 ] 00:25:33.220 [2024-05-14 23:40:56.504785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.478 [2024-05-14 23:40:56.724419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.420  Copying: 48/48 [kB] (average 46 MBps) 00:25:35.420 00:25:35.420 23:40:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:35.420 23:40:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:35.420 23:40:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:35.420 23:40:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:25:35.420 23:40:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:25:35.420 23:40:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:25:35.420 23:40:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:25:35.421 23:40:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:35.421 23:40:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:25:35.421 23:40:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:35.421 23:40:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:35.421 { 00:25:35.421 "subsystems": [ 00:25:35.421 { 00:25:35.421 "subsystem": "bdev", 
00:25:35.421 "config": [ 00:25:35.421 { 00:25:35.421 "params": { 00:25:35.421 "trtype": "pcie", 00:25:35.421 "name": "Nvme0", 00:25:35.421 "traddr": "0000:00:10.0" 00:25:35.421 }, 00:25:35.421 "method": "bdev_nvme_attach_controller" 00:25:35.421 }, 00:25:35.421 { 00:25:35.421 "method": "bdev_wait_for_examine" 00:25:35.421 } 00:25:35.421 ] 00:25:35.421 } 00:25:35.421 ] 00:25:35.421 } 00:25:35.421 [2024-05-14 23:40:58.522908] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:35.421 [2024-05-14 23:40:58.523088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78048 ] 00:25:35.421 [2024-05-14 23:40:58.673593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.678 [2024-05-14 23:40:58.899447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.620  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:37.620 00:25:37.620 00:25:37.620 real 0m43.560s 00:25:37.620 user 0m36.331s 00:25:37.620 sys 0m4.797s 00:25:37.620 ************************************ 00:25:37.620 END TEST dd_rw 00:25:37.620 ************************************ 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:25:37.620 ************************************ 00:25:37.620 START TEST dd_rw_offset 00:25:37.620 ************************************ 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1121 -- # basic_offset 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=4wqh220hnsdi526degerog8d1yv85tri4bvocnh6osqq50ki6ypbknxvqz5u33pspicnry2nq4obyne69gaqjst8nm4vsl1qebdpwpphmsafb9fn5ooxgohdxk48i5vzyncbg2eu94gdrzls85vjj5823f8hj84ry25xgi4xmatpupqo96xbps4wsrxtfahbg8y5ya8ng15cq19kjysvlw0d7yixrznkqhgr8n1bn90koehsdgy5ev8y8lgxkq4liz5kf2s3oqulqyf6f6gjw3fs4iznpll0laj11g1kmw2zlzxk4ospipdp10vpplojcfn9ozx7m6ibl3ttivgfppp02wluiaxb9kyww4y6ps2rrfajwff9w9nr960zajlflcqstkbqw2741wjqsr1xdviq02xxjpck4heursb41qw0vfv30z7qucbex24t976w64pcneuhqaqpcmrxqbi9onz14j1txu9agv25hyjcmtd5u13ebyco64i5stvjmc5c9nicy3vknauarhbsq0xqwyzli42pvtgam00efpy0r8i4oduxrks8zy72o4fzjhg0javuw529f9czrnm71nluqj7untn4wtd3p306fdnbkkwnnhpq6hltf9ctk1n5dbigg6n4pf1yd7hse89bf18fywxzj35kf0njpskau5wds7mnsxm0rz33cqwszsx5iva2wt4ixjxipzvx1185gn6fwfilapwszoaq8rxwb5nwwjk4h0n7hihqy18vspbvccbr5uyb2xyedd85omx4p2zquue6dxr4cjfjc3r4lo8dqazmco7wjx4ni7nexys5fls9inahj95s02btab4l2cc5olhz7f0nuva5fewwycrio38rffvhitfbit2wcunvam9fog0ze7ciadp7ykd0tpg8mzbnkppjm97vv9no2k2nqs6v52e0txo9moa5c4clx1dp13rsvchxaaycj2n6ue0e3ec3jdtthn3j9m6wljn6ijylagl5ux6go1d6dtlcm7krg57mo0e8v16mhihau8jmwxu5bhp8n1h21cfpk8li07jinkgnp5p66qfeqo2q14ttu040uspx0tevzoflp489hlw66yeguqtf9ftbtihcdtwkzgmnx7vjdthyno32ab41bjdbov5g3tjunhhny2exbo83c69ussqknn5u3596oatl4hjmrcd67w48b3ynz3a3g063zt08sebuwk91a3f6fvflsh7klzlq2awle6n71q3g3g662zbmbkkazlkdmrkvjqlmfdtxe0rda7b5ocke2xm9ata35186hwfljepi5a09kq4ioefmitofvjtpiuwkiexx3scp0o8ubfacxc56fvg2he69dzbc7kf7zex8lle7bkqmk6jk70ajtxcx9wdozjrynaf5liv5qftb0ii2lifbpd5pokego7oobd3xlhjonjfc6ykbtsaubfj6otxs1b5cc89yoqri7t7rxsnxqro5yt1lxfc2lzpt8bfab8zlhbo0c5rt83zl4vejk35s7xylkep4g5ehtveuondczp9yee4jzy6w3bewtohnnnqnmvzre6itwac8csfl6z4hv3krq25edwfdwfk74dthxzx4yepsgc7ufzr2m68zu87vspf9vfvqz3w3pja8qba1wiqbgbk0t7z1xqv65b7a2ip3frrax4agmmk97haav9690a8ypaa6ujdera19dl0fmfawjk79itdvgu25l2v4uxoa61r4bsqu6j0rpje66z7qtlk8w6xwo3gg4h94smakgpwl7isfqbzjqnjyjtn9k0mcboey6lakgzif9e11to2n4z7igyrxjgphu797gnyq2iuglijsijp4r2h3wiwah5ypvpqhr9huzjgi9peuudbisq73x277vox07amr5ktwrqp1byounz9vvdr3eouhjxn4ssstcjqo4beofknypimvmvgmmcjho9baipfuc06u9jptku9ps8lwqwumekzfaddidif6oi32erjuz8hh67lnwwquuufmdxigv4eitkmpqpssbjooy7ua5vmo4x6tnb104pn1cc03luhc7c9u8dt82z3jcssmxool48fy4rx5bszbr6f9h5vn5lb9fca7wwh0utmul0vmc1iubr5cw6liwg535d27id8ognc68c8b76x13ixyvzdocgdy88gsgl30xbnncqwujg4m5sd75cd60e71gws4p9z48l5ugg8yjufqnosfrhc0p7ht2y9u0z7prg5l0j19zob3rz5socp4nohpqusb312hzyhb38h6to0hpuzgs4oqaaxy9rjzzchx8d1bojyh4p15nvxvl1y172t1uk38ggj26afkjh8ecs8t51w41b7jk7piep37nmz1fp1yyv2750362qeve1dfekqj6wdj83s48zs3x2s3aofeg59cw19wyyb7vtdha6eurk0a129yitn6qkvo8u1ve2uc8h7lliugdt3rjb20tk77yd1wbtonu9v59qjb5spw1pfwkbg4jdy6pybubp8z43nmt0odz49mbw2zu2uz25dxcn09frv5d86fizdobcuvbrk6yvrtg9sva9aoiix1gyhn7f80x0exncuyvkut2eztloxdwyptdfrwxrnxanwh0xyxg5wg580qnd8sxy4ad0pw8vg52i64uilpcjhvfun3em67ddrzsws60h4b0imga8hu97h0e17t68dxkorhxyft6q20osrumwjv7gcfok8pff74daa8gvb0ft6fql0ailsyyv555iao7l6chbidn80eupfhod3ht83lrqzr86wtatw6l3ntn6t4uzmn8prs8zn8w0nh59pnadz5efjet127n1qcq7krlew6evdst1stioitzkuwo01dv4vd1sw22k6rouvexm5jf9ka97k7hzr30uoqm5flzmyid8qvzdlexreeyp9z84k8c5uzh241kjcpx2of18n733s9mvmieiv6kaexzu7om2u2smq7esn85tm2m5h8wwb23pwtvfxro2pfsf9uqr77zsbht7cijimwn9podqajj6boudvg76np550qqkmre0tty92rbz4o9sgis83dlq019uuzjma3nc88w00zfglisamz17pqpfpvkb75ult4cvoboigpba16tcte0pls7bt2qfitf1i4gkz0xx6oqcvnw7djafzs698h11bs6qh58ctiqdex0fpxfvpyim8s4v4kfp10cw4ssluhnhlsaxxfxwi39to7zqcixft2n53i4922rxl6lwtxp5po3dqie6hdp1kx9pez7ewyohnymq92iz2lo9mqhtv1g6pz20cwtipdczedr0sj1qorwb1lqzh2moh4n5eiuebs8wp6ihbidoj403ady9xivf5g1tin7l6dnhx787bs7fuvp5t9law1z2n0tyvq57ll89cbbc43fd5gxr2merdy9bxbtkuerahwrt39718a5mavbi3ibm3ydme56de71ksgu70sva4fwyykq73odd1xyactwzl4pm1ei9z6fcf2tju0di
61m4miksgj71958yxoquhrwkpmlp3s2c3wxsh1sa3oa1hawl24aylzyngb1gji059dwg4nztb1iuegdaq8jp151zojll1v1g63qvclzdv20rh2ylbqiq5gxgshx5zok8atuvsc13ifzau05zwsqab1y012uqd1dnjaksm43sug5wuehe1981qjnoe8prwrl9ng5e65m4lpuguvg6b5lk2i17nbdjxrkngxdxonovcyvhov9uqyitnqqf10rqhdkkva2kox2gqz5zwj0bgn34zxe6zeau6vpno5gwtskwshmflqxc0s2pcieu8v0cdfu28yx4mnopop89pg0eiimliawft4wh0b4m7h8cpu8hjzoa8mfnrek80jb2e88pshu9c7msigtkx0232l5xny03gtlxb870nuaac66t1g3lm0o8a5fgtewtdnwttugw26ecy3b4zntbuy1tley81gvp0z6qxty5a4jz5j3rnwxnhhwzne50llsbzlyyw3p54muj67qjfwvw6qqyt3v7fbm8vf46ee7a8awr4p 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:25:37.620 23:41:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:25:37.620 { 00:25:37.620 "subsystems": [ 00:25:37.620 { 00:25:37.620 "subsystem": "bdev", 00:25:37.620 "config": [ 00:25:37.620 { 00:25:37.620 "params": { 00:25:37.620 "trtype": "pcie", 00:25:37.620 "name": "Nvme0", 00:25:37.620 "traddr": "0000:00:10.0" 00:25:37.620 }, 00:25:37.620 "method": "bdev_nvme_attach_controller" 00:25:37.620 }, 00:25:37.620 { 00:25:37.620 "method": "bdev_wait_for_examine" 00:25:37.620 } 00:25:37.620 ] 00:25:37.620 } 00:25:37.620 ] 00:25:37.620 } 00:25:37.620 [2024-05-14 23:41:00.771772] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:37.620 [2024-05-14 23:41:00.771958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78100 ] 00:25:37.880 [2024-05-14 23:41:00.923873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.880 [2024-05-14 23:41:01.138363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.821  Copying: 4096/4096 [B] (average 4000 kBps) 00:25:39.821 00:25:39.821 23:41:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:25:39.821 23:41:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:25:39.821 23:41:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:25:39.821 23:41:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:25:39.821 { 00:25:39.821 "subsystems": [ 00:25:39.821 { 00:25:39.821 "subsystem": "bdev", 00:25:39.821 "config": [ 00:25:39.821 { 00:25:39.821 "params": { 00:25:39.821 "trtype": "pcie", 00:25:39.821 "name": "Nvme0", 00:25:39.821 "traddr": "0000:00:10.0" 00:25:39.821 }, 00:25:39.821 "method": "bdev_nvme_attach_controller" 00:25:39.821 }, 00:25:39.821 { 00:25:39.821 "method": "bdev_wait_for_examine" 00:25:39.821 } 00:25:39.821 ] 00:25:39.821 } 00:25:39.821 ] 00:25:39.821 } 00:25:39.821 [2024-05-14 23:41:02.933751] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
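The dd_rw_offset round above writes a single 4096-byte block of generated data at logical block offset 1 (--seek=1) and reads it back from the same offset (--skip=1 --count=1) before comparing the payloads; the long backslash-escaped pattern below is bash xtrace of that [[ ... == ... ]] comparison. A minimal sketch of the offset round trip, under the same assumptions as the earlier sketch (bdev.json stands in for the /dev/fd/62 config) and with cmp used in place of the script's read -rn4096 / [[ == ]] check:

  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json bdev.json             # write one native block at offset 1
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json bdev.json   # read that block back from offset 1
  cmp -s dd.dump0 dd.dump1                                                 # payloads must be identical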
00:25:39.821 [2024-05-14 23:41:02.933924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78146 ] 00:25:39.821 [2024-05-14 23:41:03.088623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.080 [2024-05-14 23:41:03.309142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.034  Copying: 4096/4096 [B] (average 4000 kBps) 00:25:42.034 00:25:42.034 23:41:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:25:42.035 23:41:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 4wqh220hnsdi526degerog8d1yv85tri4bvocnh6osqq50ki6ypbknxvqz5u33pspicnry2nq4obyne69gaqjst8nm4vsl1qebdpwpphmsafb9fn5ooxgohdxk48i5vzyncbg2eu94gdrzls85vjj5823f8hj84ry25xgi4xmatpupqo96xbps4wsrxtfahbg8y5ya8ng15cq19kjysvlw0d7yixrznkqhgr8n1bn90koehsdgy5ev8y8lgxkq4liz5kf2s3oqulqyf6f6gjw3fs4iznpll0laj11g1kmw2zlzxk4ospipdp10vpplojcfn9ozx7m6ibl3ttivgfppp02wluiaxb9kyww4y6ps2rrfajwff9w9nr960zajlflcqstkbqw2741wjqsr1xdviq02xxjpck4heursb41qw0vfv30z7qucbex24t976w64pcneuhqaqpcmrxqbi9onz14j1txu9agv25hyjcmtd5u13ebyco64i5stvjmc5c9nicy3vknauarhbsq0xqwyzli42pvtgam00efpy0r8i4oduxrks8zy72o4fzjhg0javuw529f9czrnm71nluqj7untn4wtd3p306fdnbkkwnnhpq6hltf9ctk1n5dbigg6n4pf1yd7hse89bf18fywxzj35kf0njpskau5wds7mnsxm0rz33cqwszsx5iva2wt4ixjxipzvx1185gn6fwfilapwszoaq8rxwb5nwwjk4h0n7hihqy18vspbvccbr5uyb2xyedd85omx4p2zquue6dxr4cjfjc3r4lo8dqazmco7wjx4ni7nexys5fls9inahj95s02btab4l2cc5olhz7f0nuva5fewwycrio38rffvhitfbit2wcunvam9fog0ze7ciadp7ykd0tpg8mzbnkppjm97vv9no2k2nqs6v52e0txo9moa5c4clx1dp13rsvchxaaycj2n6ue0e3ec3jdtthn3j9m6wljn6ijylagl5ux6go1d6dtlcm7krg57mo0e8v16mhihau8jmwxu5bhp8n1h21cfpk8li07jinkgnp5p66qfeqo2q14ttu040uspx0tevzoflp489hlw66yeguqtf9ftbtihcdtwkzgmnx7vjdthyno32ab41bjdbov5g3tjunhhny2exbo83c69ussqknn5u3596oatl4hjmrcd67w48b3ynz3a3g063zt08sebuwk91a3f6fvflsh7klzlq2awle6n71q3g3g662zbmbkkazlkdmrkvjqlmfdtxe0rda7b5ocke2xm9ata35186hwfljepi5a09kq4ioefmitofvjtpiuwkiexx3scp0o8ubfacxc56fvg2he69dzbc7kf7zex8lle7bkqmk6jk70ajtxcx9wdozjrynaf5liv5qftb0ii2lifbpd5pokego7oobd3xlhjonjfc6ykbtsaubfj6otxs1b5cc89yoqri7t7rxsnxqro5yt1lxfc2lzpt8bfab8zlhbo0c5rt83zl4vejk35s7xylkep4g5ehtveuondczp9yee4jzy6w3bewtohnnnqnmvzre6itwac8csfl6z4hv3krq25edwfdwfk74dthxzx4yepsgc7ufzr2m68zu87vspf9vfvqz3w3pja8qba1wiqbgbk0t7z1xqv65b7a2ip3frrax4agmmk97haav9690a8ypaa6ujdera19dl0fmfawjk79itdvgu25l2v4uxoa61r4bsqu6j0rpje66z7qtlk8w6xwo3gg4h94smakgpwl7isfqbzjqnjyjtn9k0mcboey6lakgzif9e11to2n4z7igyrxjgphu797gnyq2iuglijsijp4r2h3wiwah5ypvpqhr9huzjgi9peuudbisq73x277vox07amr5ktwrqp1byounz9vvdr3eouhjxn4ssstcjqo4beofknypimvmvgmmcjho9baipfuc06u9jptku9ps8lwqwumekzfaddidif6oi32erjuz8hh67lnwwquuufmdxigv4eitkmpqpssbjooy7ua5vmo4x6tnb104pn1cc03luhc7c9u8dt82z3jcssmxool48fy4rx5bszbr6f9h5vn5lb9fca7wwh0utmul0vmc1iubr5cw6liwg535d27id8ognc68c8b76x13ixyvzdocgdy88gsgl30xbnncqwujg4m5sd75cd60e71gws4p9z48l5ugg8yjufqnosfrhc0p7ht2y9u0z7prg5l0j19zob3rz5socp4nohpqusb312hzyhb38h6to0hpuzgs4oqaaxy9rjzzchx8d1bojyh4p15nvxvl1y172t1uk38ggj26afkjh8ecs8t51w41b7jk7piep37nmz1fp1yyv2750362qeve1dfekqj6wdj83s48zs3x2s3aofeg59cw19wyyb7vtdha6eurk0a129yitn6qkvo8u1ve2uc8h7lliugdt3rjb20tk77yd1wbtonu9v59qjb5spw1pfwkbg4jdy6pybubp8z43nmt0odz49mbw2zu2uz25dxcn09frv5d86fizdobcuvbrk6yvrtg9sva9aoiix1gyhn7f80x0exncuyvkut2eztloxdwyptdfrwxrnxanwh0xyxg5wg580qnd8sxy4ad0pw8vg52i64uilpcjhvfun3em67ddrzsws60h4b0imga8hu97h0e17t68dxkorhxyft6q20osrumwjv7gcfok8pff74daa8gvb0ft6fql0ailsyyv555iao7
l6chbidn80eupfhod3ht83lrqzr86wtatw6l3ntn6t4uzmn8prs8zn8w0nh59pnadz5efjet127n1qcq7krlew6evdst1stioitzkuwo01dv4vd1sw22k6rouvexm5jf9ka97k7hzr30uoqm5flzmyid8qvzdlexreeyp9z84k8c5uzh241kjcpx2of18n733s9mvmieiv6kaexzu7om2u2smq7esn85tm2m5h8wwb23pwtvfxro2pfsf9uqr77zsbht7cijimwn9podqajj6boudvg76np550qqkmre0tty92rbz4o9sgis83dlq019uuzjma3nc88w00zfglisamz17pqpfpvkb75ult4cvoboigpba16tcte0pls7bt2qfitf1i4gkz0xx6oqcvnw7djafzs698h11bs6qh58ctiqdex0fpxfvpyim8s4v4kfp10cw4ssluhnhlsaxxfxwi39to7zqcixft2n53i4922rxl6lwtxp5po3dqie6hdp1kx9pez7ewyohnymq92iz2lo9mqhtv1g6pz20cwtipdczedr0sj1qorwb1lqzh2moh4n5eiuebs8wp6ihbidoj403ady9xivf5g1tin7l6dnhx787bs7fuvp5t9law1z2n0tyvq57ll89cbbc43fd5gxr2merdy9bxbtkuerahwrt39718a5mavbi3ibm3ydme56de71ksgu70sva4fwyykq73odd1xyactwzl4pm1ei9z6fcf2tju0di61m4miksgj71958yxoquhrwkpmlp3s2c3wxsh1sa3oa1hawl24aylzyngb1gji059dwg4nztb1iuegdaq8jp151zojll1v1g63qvclzdv20rh2ylbqiq5gxgshx5zok8atuvsc13ifzau05zwsqab1y012uqd1dnjaksm43sug5wuehe1981qjnoe8prwrl9ng5e65m4lpuguvg6b5lk2i17nbdjxrkngxdxonovcyvhov9uqyitnqqf10rqhdkkva2kox2gqz5zwj0bgn34zxe6zeau6vpno5gwtskwshmflqxc0s2pcieu8v0cdfu28yx4mnopop89pg0eiimliawft4wh0b4m7h8cpu8hjzoa8mfnrek80jb2e88pshu9c7msigtkx0232l5xny03gtlxb870nuaac66t1g3lm0o8a5fgtewtdnwttugw26ecy3b4zntbuy1tley81gvp0z6qxty5a4jz5j3rnwxnhhwzne50llsbzlyyw3p54muj67qjfwvw6qqyt3v7fbm8vf46ee7a8awr4p == \4\w\q\h\2\2\0\h\n\s\d\i\5\2\6\d\e\g\e\r\o\g\8\d\1\y\v\8\5\t\r\i\4\b\v\o\c\n\h\6\o\s\q\q\5\0\k\i\6\y\p\b\k\n\x\v\q\z\5\u\3\3\p\s\p\i\c\n\r\y\2\n\q\4\o\b\y\n\e\6\9\g\a\q\j\s\t\8\n\m\4\v\s\l\1\q\e\b\d\p\w\p\p\h\m\s\a\f\b\9\f\n\5\o\o\x\g\o\h\d\x\k\4\8\i\5\v\z\y\n\c\b\g\2\e\u\9\4\g\d\r\z\l\s\8\5\v\j\j\5\8\2\3\f\8\h\j\8\4\r\y\2\5\x\g\i\4\x\m\a\t\p\u\p\q\o\9\6\x\b\p\s\4\w\s\r\x\t\f\a\h\b\g\8\y\5\y\a\8\n\g\1\5\c\q\1\9\k\j\y\s\v\l\w\0\d\7\y\i\x\r\z\n\k\q\h\g\r\8\n\1\b\n\9\0\k\o\e\h\s\d\g\y\5\e\v\8\y\8\l\g\x\k\q\4\l\i\z\5\k\f\2\s\3\o\q\u\l\q\y\f\6\f\6\g\j\w\3\f\s\4\i\z\n\p\l\l\0\l\a\j\1\1\g\1\k\m\w\2\z\l\z\x\k\4\o\s\p\i\p\d\p\1\0\v\p\p\l\o\j\c\f\n\9\o\z\x\7\m\6\i\b\l\3\t\t\i\v\g\f\p\p\p\0\2\w\l\u\i\a\x\b\9\k\y\w\w\4\y\6\p\s\2\r\r\f\a\j\w\f\f\9\w\9\n\r\9\6\0\z\a\j\l\f\l\c\q\s\t\k\b\q\w\2\7\4\1\w\j\q\s\r\1\x\d\v\i\q\0\2\x\x\j\p\c\k\4\h\e\u\r\s\b\4\1\q\w\0\v\f\v\3\0\z\7\q\u\c\b\e\x\2\4\t\9\7\6\w\6\4\p\c\n\e\u\h\q\a\q\p\c\m\r\x\q\b\i\9\o\n\z\1\4\j\1\t\x\u\9\a\g\v\2\5\h\y\j\c\m\t\d\5\u\1\3\e\b\y\c\o\6\4\i\5\s\t\v\j\m\c\5\c\9\n\i\c\y\3\v\k\n\a\u\a\r\h\b\s\q\0\x\q\w\y\z\l\i\4\2\p\v\t\g\a\m\0\0\e\f\p\y\0\r\8\i\4\o\d\u\x\r\k\s\8\z\y\7\2\o\4\f\z\j\h\g\0\j\a\v\u\w\5\2\9\f\9\c\z\r\n\m\7\1\n\l\u\q\j\7\u\n\t\n\4\w\t\d\3\p\3\0\6\f\d\n\b\k\k\w\n\n\h\p\q\6\h\l\t\f\9\c\t\k\1\n\5\d\b\i\g\g\6\n\4\p\f\1\y\d\7\h\s\e\8\9\b\f\1\8\f\y\w\x\z\j\3\5\k\f\0\n\j\p\s\k\a\u\5\w\d\s\7\m\n\s\x\m\0\r\z\3\3\c\q\w\s\z\s\x\5\i\v\a\2\w\t\4\i\x\j\x\i\p\z\v\x\1\1\8\5\g\n\6\f\w\f\i\l\a\p\w\s\z\o\a\q\8\r\x\w\b\5\n\w\w\j\k\4\h\0\n\7\h\i\h\q\y\1\8\v\s\p\b\v\c\c\b\r\5\u\y\b\2\x\y\e\d\d\8\5\o\m\x\4\p\2\z\q\u\u\e\6\d\x\r\4\c\j\f\j\c\3\r\4\l\o\8\d\q\a\z\m\c\o\7\w\j\x\4\n\i\7\n\e\x\y\s\5\f\l\s\9\i\n\a\h\j\9\5\s\0\2\b\t\a\b\4\l\2\c\c\5\o\l\h\z\7\f\0\n\u\v\a\5\f\e\w\w\y\c\r\i\o\3\8\r\f\f\v\h\i\t\f\b\i\t\2\w\c\u\n\v\a\m\9\f\o\g\0\z\e\7\c\i\a\d\p\7\y\k\d\0\t\p\g\8\m\z\b\n\k\p\p\j\m\9\7\v\v\9\n\o\2\k\2\n\q\s\6\v\5\2\e\0\t\x\o\9\m\o\a\5\c\4\c\l\x\1\d\p\1\3\r\s\v\c\h\x\a\a\y\c\j\2\n\6\u\e\0\e\3\e\c\3\j\d\t\t\h\n\3\j\9\m\6\w\l\j\n\6\i\j\y\l\a\g\l\5\u\x\6\g\o\1\d\6\d\t\l\c\m\7\k\r\g\5\7\m\o\0\e\8\v\1\6\m\h\i\h\a\u\8\j\m\w\x\u\5\b\h\p\8\n\1\h\2\1\c\f\p\k\8\l\i\0\7\j\i\n\k\g\n\p\5\p\6\6\q\f\e\q\o\2\q\1\4\t\t\u\0\4\0\u\s\p\x\0\t\e\v\z\o\f\l\p\4\8\9\h\l\w\6\6\y\e\g\u\q\t\f\9\f
\t\b\t\i\h\c\d\t\w\k\z\g\m\n\x\7\v\j\d\t\h\y\n\o\3\2\a\b\4\1\b\j\d\b\o\v\5\g\3\t\j\u\n\h\h\n\y\2\e\x\b\o\8\3\c\6\9\u\s\s\q\k\n\n\5\u\3\5\9\6\o\a\t\l\4\h\j\m\r\c\d\6\7\w\4\8\b\3\y\n\z\3\a\3\g\0\6\3\z\t\0\8\s\e\b\u\w\k\9\1\a\3\f\6\f\v\f\l\s\h\7\k\l\z\l\q\2\a\w\l\e\6\n\7\1\q\3\g\3\g\6\6\2\z\b\m\b\k\k\a\z\l\k\d\m\r\k\v\j\q\l\m\f\d\t\x\e\0\r\d\a\7\b\5\o\c\k\e\2\x\m\9\a\t\a\3\5\1\8\6\h\w\f\l\j\e\p\i\5\a\0\9\k\q\4\i\o\e\f\m\i\t\o\f\v\j\t\p\i\u\w\k\i\e\x\x\3\s\c\p\0\o\8\u\b\f\a\c\x\c\5\6\f\v\g\2\h\e\6\9\d\z\b\c\7\k\f\7\z\e\x\8\l\l\e\7\b\k\q\m\k\6\j\k\7\0\a\j\t\x\c\x\9\w\d\o\z\j\r\y\n\a\f\5\l\i\v\5\q\f\t\b\0\i\i\2\l\i\f\b\p\d\5\p\o\k\e\g\o\7\o\o\b\d\3\x\l\h\j\o\n\j\f\c\6\y\k\b\t\s\a\u\b\f\j\6\o\t\x\s\1\b\5\c\c\8\9\y\o\q\r\i\7\t\7\r\x\s\n\x\q\r\o\5\y\t\1\l\x\f\c\2\l\z\p\t\8\b\f\a\b\8\z\l\h\b\o\0\c\5\r\t\8\3\z\l\4\v\e\j\k\3\5\s\7\x\y\l\k\e\p\4\g\5\e\h\t\v\e\u\o\n\d\c\z\p\9\y\e\e\4\j\z\y\6\w\3\b\e\w\t\o\h\n\n\n\q\n\m\v\z\r\e\6\i\t\w\a\c\8\c\s\f\l\6\z\4\h\v\3\k\r\q\2\5\e\d\w\f\d\w\f\k\7\4\d\t\h\x\z\x\4\y\e\p\s\g\c\7\u\f\z\r\2\m\6\8\z\u\8\7\v\s\p\f\9\v\f\v\q\z\3\w\3\p\j\a\8\q\b\a\1\w\i\q\b\g\b\k\0\t\7\z\1\x\q\v\6\5\b\7\a\2\i\p\3\f\r\r\a\x\4\a\g\m\m\k\9\7\h\a\a\v\9\6\9\0\a\8\y\p\a\a\6\u\j\d\e\r\a\1\9\d\l\0\f\m\f\a\w\j\k\7\9\i\t\d\v\g\u\2\5\l\2\v\4\u\x\o\a\6\1\r\4\b\s\q\u\6\j\0\r\p\j\e\6\6\z\7\q\t\l\k\8\w\6\x\w\o\3\g\g\4\h\9\4\s\m\a\k\g\p\w\l\7\i\s\f\q\b\z\j\q\n\j\y\j\t\n\9\k\0\m\c\b\o\e\y\6\l\a\k\g\z\i\f\9\e\1\1\t\o\2\n\4\z\7\i\g\y\r\x\j\g\p\h\u\7\9\7\g\n\y\q\2\i\u\g\l\i\j\s\i\j\p\4\r\2\h\3\w\i\w\a\h\5\y\p\v\p\q\h\r\9\h\u\z\j\g\i\9\p\e\u\u\d\b\i\s\q\7\3\x\2\7\7\v\o\x\0\7\a\m\r\5\k\t\w\r\q\p\1\b\y\o\u\n\z\9\v\v\d\r\3\e\o\u\h\j\x\n\4\s\s\s\t\c\j\q\o\4\b\e\o\f\k\n\y\p\i\m\v\m\v\g\m\m\c\j\h\o\9\b\a\i\p\f\u\c\0\6\u\9\j\p\t\k\u\9\p\s\8\l\w\q\w\u\m\e\k\z\f\a\d\d\i\d\i\f\6\o\i\3\2\e\r\j\u\z\8\h\h\6\7\l\n\w\w\q\u\u\u\f\m\d\x\i\g\v\4\e\i\t\k\m\p\q\p\s\s\b\j\o\o\y\7\u\a\5\v\m\o\4\x\6\t\n\b\1\0\4\p\n\1\c\c\0\3\l\u\h\c\7\c\9\u\8\d\t\8\2\z\3\j\c\s\s\m\x\o\o\l\4\8\f\y\4\r\x\5\b\s\z\b\r\6\f\9\h\5\v\n\5\l\b\9\f\c\a\7\w\w\h\0\u\t\m\u\l\0\v\m\c\1\i\u\b\r\5\c\w\6\l\i\w\g\5\3\5\d\2\7\i\d\8\o\g\n\c\6\8\c\8\b\7\6\x\1\3\i\x\y\v\z\d\o\c\g\d\y\8\8\g\s\g\l\3\0\x\b\n\n\c\q\w\u\j\g\4\m\5\s\d\7\5\c\d\6\0\e\7\1\g\w\s\4\p\9\z\4\8\l\5\u\g\g\8\y\j\u\f\q\n\o\s\f\r\h\c\0\p\7\h\t\2\y\9\u\0\z\7\p\r\g\5\l\0\j\1\9\z\o\b\3\r\z\5\s\o\c\p\4\n\o\h\p\q\u\s\b\3\1\2\h\z\y\h\b\3\8\h\6\t\o\0\h\p\u\z\g\s\4\o\q\a\a\x\y\9\r\j\z\z\c\h\x\8\d\1\b\o\j\y\h\4\p\1\5\n\v\x\v\l\1\y\1\7\2\t\1\u\k\3\8\g\g\j\2\6\a\f\k\j\h\8\e\c\s\8\t\5\1\w\4\1\b\7\j\k\7\p\i\e\p\3\7\n\m\z\1\f\p\1\y\y\v\2\7\5\0\3\6\2\q\e\v\e\1\d\f\e\k\q\j\6\w\d\j\8\3\s\4\8\z\s\3\x\2\s\3\a\o\f\e\g\5\9\c\w\1\9\w\y\y\b\7\v\t\d\h\a\6\e\u\r\k\0\a\1\2\9\y\i\t\n\6\q\k\v\o\8\u\1\v\e\2\u\c\8\h\7\l\l\i\u\g\d\t\3\r\j\b\2\0\t\k\7\7\y\d\1\w\b\t\o\n\u\9\v\5\9\q\j\b\5\s\p\w\1\p\f\w\k\b\g\4\j\d\y\6\p\y\b\u\b\p\8\z\4\3\n\m\t\0\o\d\z\4\9\m\b\w\2\z\u\2\u\z\2\5\d\x\c\n\0\9\f\r\v\5\d\8\6\f\i\z\d\o\b\c\u\v\b\r\k\6\y\v\r\t\g\9\s\v\a\9\a\o\i\i\x\1\g\y\h\n\7\f\8\0\x\0\e\x\n\c\u\y\v\k\u\t\2\e\z\t\l\o\x\d\w\y\p\t\d\f\r\w\x\r\n\x\a\n\w\h\0\x\y\x\g\5\w\g\5\8\0\q\n\d\8\s\x\y\4\a\d\0\p\w\8\v\g\5\2\i\6\4\u\i\l\p\c\j\h\v\f\u\n\3\e\m\6\7\d\d\r\z\s\w\s\6\0\h\4\b\0\i\m\g\a\8\h\u\9\7\h\0\e\1\7\t\6\8\d\x\k\o\r\h\x\y\f\t\6\q\2\0\o\s\r\u\m\w\j\v\7\g\c\f\o\k\8\p\f\f\7\4\d\a\a\8\g\v\b\0\f\t\6\f\q\l\0\a\i\l\s\y\y\v\5\5\5\i\a\o\7\l\6\c\h\b\i\d\n\8\0\e\u\p\f\h\o\d\3\h\t\8\3\l\r\q\z\r\8\6\w\t\a\t\w\6\l\3\n\t\n\6\t\4\u\z\m\n\8\p\r\s\8\z\n\8\w\0\n\h\5\9\p\n\a\d\z\5\e\f\j\e\t\1\2\7\n\1\q\c\q\7\k\r\l\e\w\6\e\v\d\s\t\1\s\t\i\o\i\t\z\k\u\w\o\0\1\d\v\4\v\
d\1\s\w\2\2\k\6\r\o\u\v\e\x\m\5\j\f\9\k\a\9\7\k\7\h\z\r\3\0\u\o\q\m\5\f\l\z\m\y\i\d\8\q\v\z\d\l\e\x\r\e\e\y\p\9\z\8\4\k\8\c\5\u\z\h\2\4\1\k\j\c\p\x\2\o\f\1\8\n\7\3\3\s\9\m\v\m\i\e\i\v\6\k\a\e\x\z\u\7\o\m\2\u\2\s\m\q\7\e\s\n\8\5\t\m\2\m\5\h\8\w\w\b\2\3\p\w\t\v\f\x\r\o\2\p\f\s\f\9\u\q\r\7\7\z\s\b\h\t\7\c\i\j\i\m\w\n\9\p\o\d\q\a\j\j\6\b\o\u\d\v\g\7\6\n\p\5\5\0\q\q\k\m\r\e\0\t\t\y\9\2\r\b\z\4\o\9\s\g\i\s\8\3\d\l\q\0\1\9\u\u\z\j\m\a\3\n\c\8\8\w\0\0\z\f\g\l\i\s\a\m\z\1\7\p\q\p\f\p\v\k\b\7\5\u\l\t\4\c\v\o\b\o\i\g\p\b\a\1\6\t\c\t\e\0\p\l\s\7\b\t\2\q\f\i\t\f\1\i\4\g\k\z\0\x\x\6\o\q\c\v\n\w\7\d\j\a\f\z\s\6\9\8\h\1\1\b\s\6\q\h\5\8\c\t\i\q\d\e\x\0\f\p\x\f\v\p\y\i\m\8\s\4\v\4\k\f\p\1\0\c\w\4\s\s\l\u\h\n\h\l\s\a\x\x\f\x\w\i\3\9\t\o\7\z\q\c\i\x\f\t\2\n\5\3\i\4\9\2\2\r\x\l\6\l\w\t\x\p\5\p\o\3\d\q\i\e\6\h\d\p\1\k\x\9\p\e\z\7\e\w\y\o\h\n\y\m\q\9\2\i\z\2\l\o\9\m\q\h\t\v\1\g\6\p\z\2\0\c\w\t\i\p\d\c\z\e\d\r\0\s\j\1\q\o\r\w\b\1\l\q\z\h\2\m\o\h\4\n\5\e\i\u\e\b\s\8\w\p\6\i\h\b\i\d\o\j\4\0\3\a\d\y\9\x\i\v\f\5\g\1\t\i\n\7\l\6\d\n\h\x\7\8\7\b\s\7\f\u\v\p\5\t\9\l\a\w\1\z\2\n\0\t\y\v\q\5\7\l\l\8\9\c\b\b\c\4\3\f\d\5\g\x\r\2\m\e\r\d\y\9\b\x\b\t\k\u\e\r\a\h\w\r\t\3\9\7\1\8\a\5\m\a\v\b\i\3\i\b\m\3\y\d\m\e\5\6\d\e\7\1\k\s\g\u\7\0\s\v\a\4\f\w\y\y\k\q\7\3\o\d\d\1\x\y\a\c\t\w\z\l\4\p\m\1\e\i\9\z\6\f\c\f\2\t\j\u\0\d\i\6\1\m\4\m\i\k\s\g\j\7\1\9\5\8\y\x\o\q\u\h\r\w\k\p\m\l\p\3\s\2\c\3\w\x\s\h\1\s\a\3\o\a\1\h\a\w\l\2\4\a\y\l\z\y\n\g\b\1\g\j\i\0\5\9\d\w\g\4\n\z\t\b\1\i\u\e\g\d\a\q\8\j\p\1\5\1\z\o\j\l\l\1\v\1\g\6\3\q\v\c\l\z\d\v\2\0\r\h\2\y\l\b\q\i\q\5\g\x\g\s\h\x\5\z\o\k\8\a\t\u\v\s\c\1\3\i\f\z\a\u\0\5\z\w\s\q\a\b\1\y\0\1\2\u\q\d\1\d\n\j\a\k\s\m\4\3\s\u\g\5\w\u\e\h\e\1\9\8\1\q\j\n\o\e\8\p\r\w\r\l\9\n\g\5\e\6\5\m\4\l\p\u\g\u\v\g\6\b\5\l\k\2\i\1\7\n\b\d\j\x\r\k\n\g\x\d\x\o\n\o\v\c\y\v\h\o\v\9\u\q\y\i\t\n\q\q\f\1\0\r\q\h\d\k\k\v\a\2\k\o\x\2\g\q\z\5\z\w\j\0\b\g\n\3\4\z\x\e\6\z\e\a\u\6\v\p\n\o\5\g\w\t\s\k\w\s\h\m\f\l\q\x\c\0\s\2\p\c\i\e\u\8\v\0\c\d\f\u\2\8\y\x\4\m\n\o\p\o\p\8\9\p\g\0\e\i\i\m\l\i\a\w\f\t\4\w\h\0\b\4\m\7\h\8\c\p\u\8\h\j\z\o\a\8\m\f\n\r\e\k\8\0\j\b\2\e\8\8\p\s\h\u\9\c\7\m\s\i\g\t\k\x\0\2\3\2\l\5\x\n\y\0\3\g\t\l\x\b\8\7\0\n\u\a\a\c\6\6\t\1\g\3\l\m\0\o\8\a\5\f\g\t\e\w\t\d\n\w\t\t\u\g\w\2\6\e\c\y\3\b\4\z\n\t\b\u\y\1\t\l\e\y\8\1\g\v\p\0\z\6\q\x\t\y\5\a\4\j\z\5\j\3\r\n\w\x\n\h\h\w\z\n\e\5\0\l\l\s\b\z\l\y\y\w\3\p\5\4\m\u\j\6\7\q\j\f\w\v\w\6\q\q\y\t\3\v\7\f\b\m\8\v\f\4\6\e\e\7\a\8\a\w\r\4\p ]] 00:25:42.035 ************************************ 00:25:42.035 END TEST dd_rw_offset 00:25:42.035 ************************************ 00:25:42.035 00:25:42.035 real 0m4.372s 00:25:42.035 user 0m3.585s 00:25:42.035 sys 0m0.501s 00:25:42.035 23:41:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:42.035 23:41:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:25:42.035 23:41:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:25:42.035 23:41:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:25:42.035 23:41:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:42.035 23:41:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:25:42.035 23:41:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:25:42.035 23:41:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:25:42.035 23:41:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:25:42.035 23:41:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:42.035 23:41:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:25:42.035 23:41:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:25:42.035 23:41:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:25:42.035 { 00:25:42.035 "subsystems": [ 00:25:42.035 { 00:25:42.035 "subsystem": "bdev", 00:25:42.035 "config": [ 00:25:42.035 { 00:25:42.035 "params": { 00:25:42.035 "trtype": "pcie", 00:25:42.035 "name": "Nvme0", 00:25:42.035 "traddr": "0000:00:10.0" 00:25:42.035 }, 00:25:42.035 "method": "bdev_nvme_attach_controller" 00:25:42.035 }, 00:25:42.035 { 00:25:42.035 "method": "bdev_wait_for_examine" 00:25:42.035 } 00:25:42.035 ] 00:25:42.035 } 00:25:42.035 ] 00:25:42.035 } 00:25:42.035 [2024-05-14 23:41:05.138640] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:42.035 [2024-05-14 23:41:05.138842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78190 ] 00:25:42.035 [2024-05-14 23:41:05.292201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.326 [2024-05-14 23:41:05.511329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.266  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:44.266 00:25:44.266 23:41:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:44.266 00:25:44.266 real 0m53.720s 00:25:44.266 user 0m43.668s 00:25:44.266 sys 0m6.009s 00:25:44.266 ************************************ 00:25:44.266 END TEST spdk_dd_basic_rw 00:25:44.266 ************************************ 00:25:44.266 23:41:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:44.266 23:41:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:25:44.266 23:41:07 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:44.266 23:41:07 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:44.266 23:41:07 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:44.266 23:41:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:25:44.266 ************************************ 00:25:44.266 START TEST spdk_dd_posix 00:25:44.266 ************************************ 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:44.266 * Looking for test storage... 
00:25:44.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- 
# printf '* First test run%s\n' ', using AIO' 00:25:44.266 * First test run, using AIO 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:25:44.266 ************************************ 00:25:44.266 START TEST dd_flag_append 00:25:44.266 ************************************ 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1121 -- # append 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=12p43sauyrygp81c80ksrhp91av5rx81 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=89liqldcnc4fus60a4t28fx48ocefrx7 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 12p43sauyrygp81c80ksrhp91av5rx81 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 89liqldcnc4fus60a4t28fx48ocefrx7 00:25:44.266 23:41:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:25:44.267 [2024-05-14 23:41:07.446515] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
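A minimal stand-alone illustration of what the dd_flag_append case above exercises, sketched with coreutils dd on the assumption that its oflag=append together with conv=notrunc mirrors spdk_dd's --oflag=append semantics (the file names and literal strings below are made up for the sketch; the test itself uses 32-byte random dumps and verifies the concatenation in the [[ ... ]] check that follows):
  printf %s 'aaaa1111' > file0                      # stands in for dd.dump0
  printf %s 'bbbb2222' > file1                      # stands in for dd.dump1
  dd if=file0 of=file1 oflag=append conv=notrunc    # conv=notrunc keeps file1's existing bytes; oflag=append writes after them
  [[ $(cat file1) == 'bbbb2222aaaa1111' ]] && echo 'append flag OK'   # dump1 content followed by dump0 content, as asserted below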
00:25:44.267 [2024-05-14 23:41:07.446766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78283 ] 00:25:44.525 [2024-05-14 23:41:07.596734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.525 [2024-05-14 23:41:07.800454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.466  Copying: 32/32 [B] (average 31 kBps) 00:25:46.466 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 89liqldcnc4fus60a4t28fx48ocefrx712p43sauyrygp81c80ksrhp91av5rx81 == \8\9\l\i\q\l\d\c\n\c\4\f\u\s\6\0\a\4\t\2\8\f\x\4\8\o\c\e\f\r\x\7\1\2\p\4\3\s\a\u\y\r\y\g\p\8\1\c\8\0\k\s\r\h\p\9\1\a\v\5\r\x\8\1 ]] 00:25:46.466 00:25:46.466 real 0m2.097s 00:25:46.466 user 0m1.640s 00:25:46.466 sys 0m0.253s 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:25:46.466 ************************************ 00:25:46.466 END TEST dd_flag_append 00:25:46.466 ************************************ 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:25:46.466 ************************************ 00:25:46.466 START TEST dd_flag_directory 00:25:46.466 ************************************ 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1121 -- # directory 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:46.466 23:41:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:46.466 [2024-05-14 23:41:09.588946] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:46.466 [2024-05-14 23:41:09.589152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78337 ] 00:25:46.466 [2024-05-14 23:41:09.741174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.725 [2024-05-14 23:41:09.934430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.293 [2024-05-14 23:41:10.284321] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:47.293 [2024-05-14 23:41:10.284429] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:47.293 [2024-05-14 23:41:10.284478] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:47.860 [2024-05-14 23:41:11.086234] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:25:48.427 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:25:48.427 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:48.427 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:25:48.427 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:25:48.427 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:25:48.427 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:48.427 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:48.428 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:25:48.428 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:48.428 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.428 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:48.428 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.428 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:48.428 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.428 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:48.428 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.428 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:48.428 23:41:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:48.428 [2024-05-14 23:41:11.593472] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:48.428 [2024-05-14 23:41:11.593687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78364 ] 00:25:48.686 [2024-05-14 23:41:11.746287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.686 [2024-05-14 23:41:11.954908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.253 [2024-05-14 23:41:12.298877] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:49.253 [2024-05-14 23:41:12.298957] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:49.253 [2024-05-14 23:41:12.298992] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:50.189 [2024-05-14 23:41:13.113769] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:25:50.189 ************************************ 00:25:50.189 END TEST dd_flag_directory 00:25:50.189 ************************************ 00:25:50.189 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:25:50.189 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:50.189 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:25:50.189 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:25:50.189 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:25:50.189 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:50.189 00:25:50.189 real 0m4.025s 00:25:50.189 user 0m3.159s 00:25:50.189 sys 0m0.470s 00:25:50.189 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:50.189 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:25:50.448 ************************************ 00:25:50.448 START TEST dd_flag_nofollow 00:25:50.448 
************************************ 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1121 -- # nofollow 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:50.448 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.449 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:50.449 23:41:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:50.449 [2024-05-14 23:41:13.664609] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
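The dd_flag_nofollow setup above symlinks dd.dump0.link and dd.dump1.link at the real dump files and then expects the copy to be rejected once the link is opened with --iflag=nofollow; the 'Too many levels of symbolic links' errors reported further down are the ELOOP that an O_NOFOLLOW open returns when the source path is a symlink. A rough stand-alone sketch with coreutils dd, assuming its iflag=nofollow behaves the same way (the file names here are illustrative only):
  printf %s 'payload' > target
  ln -fs target target.link
  dd if=target.link of=out iflag=nofollow 2>&1 | grep -i 'symbolic links'   # open fails with ELOOP because the source is a symlink
  dd if=target of=out iflag=nofollow                                        # a regular file still copies; only a symlink source is rejected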
00:25:50.449 [2024-05-14 23:41:13.664795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78418 ] 00:25:50.708 [2024-05-14 23:41:13.827392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.967 [2024-05-14 23:41:14.039243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.226 [2024-05-14 23:41:14.390554] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:51.226 [2024-05-14 23:41:14.390646] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:51.226 [2024-05-14 23:41:14.390694] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:52.162 [2024-05-14 23:41:15.266163] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:52.421 23:41:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:52.421 23:41:15 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:52.680 [2024-05-14 23:41:15.801176] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:52.680 [2024-05-14 23:41:15.801381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78446 ] 00:25:52.680 [2024-05-14 23:41:15.959339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.938 [2024-05-14 23:41:16.178665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.505 [2024-05-14 23:41:16.570750] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:53.505 [2024-05-14 23:41:16.570844] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:53.505 [2024-05-14 23:41:16.570881] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:54.478 [2024-05-14 23:41:17.436563] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:25:54.763 23:41:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:25:54.763 23:41:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:54.763 23:41:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:25:54.763 23:41:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:25:54.763 23:41:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:25:54.763 23:41:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:54.763 23:41:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:25:54.763 23:41:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:25:54.763 23:41:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:25:54.763 23:41:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:54.763 [2024-05-14 23:41:17.941412] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:25:54.763 [2024-05-14 23:41:17.941622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78473 ] 00:25:55.023 [2024-05-14 23:41:18.093026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.023 [2024-05-14 23:41:18.307209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.969  Copying: 512/512 [B] (average 500 kBps) 00:25:56.969 00:25:56.969 ************************************ 00:25:56.969 END TEST dd_flag_nofollow 00:25:56.969 ************************************ 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ uv7plmn295mqeyivzy8s3cvg51oh8t7biouqiagk2j5z1ucv91j8hnzh679n0k1uquan55eqd827kv8oft4l6ur5arjof4c44mifbamg6dcm9c5n52e3sgwltyg4tq07qlrnxf740dgtwdrz3t18dfqsynoce1roepxi1gvhdhy1cax8d62kfiae5ucki0mfj6g7yplwm3wlix2e8bl08pjzaivm3rsh87893bol4mb7p5kaanm8bgxwhmnj9urfr4jdxjkgaxw84q7kh5byveoayzmge5nxv7z7bf9ofrsvsfub9i3jj1gspb652s5ptwwytlsfyaytdhm31ctu01tltlwgj0x34s60ud9cwaogwkk9wyb4bms742hdpk22014b1z179kjuiwy8hq0hif0phx3l2iazz1k3fnhdsxwdjt9mru8oi2th24jlzbkil6i70i5r837tq847i68s9h8f7rpbfsmazak41hvcks8qczq2fvihiw7mhuepkti3 == \u\v\7\p\l\m\n\2\9\5\m\q\e\y\i\v\z\y\8\s\3\c\v\g\5\1\o\h\8\t\7\b\i\o\u\q\i\a\g\k\2\j\5\z\1\u\c\v\9\1\j\8\h\n\z\h\6\7\9\n\0\k\1\u\q\u\a\n\5\5\e\q\d\8\2\7\k\v\8\o\f\t\4\l\6\u\r\5\a\r\j\o\f\4\c\4\4\m\i\f\b\a\m\g\6\d\c\m\9\c\5\n\5\2\e\3\s\g\w\l\t\y\g\4\t\q\0\7\q\l\r\n\x\f\7\4\0\d\g\t\w\d\r\z\3\t\1\8\d\f\q\s\y\n\o\c\e\1\r\o\e\p\x\i\1\g\v\h\d\h\y\1\c\a\x\8\d\6\2\k\f\i\a\e\5\u\c\k\i\0\m\f\j\6\g\7\y\p\l\w\m\3\w\l\i\x\2\e\8\b\l\0\8\p\j\z\a\i\v\m\3\r\s\h\8\7\8\9\3\b\o\l\4\m\b\7\p\5\k\a\a\n\m\8\b\g\x\w\h\m\n\j\9\u\r\f\r\4\j\d\x\j\k\g\a\x\w\8\4\q\7\k\h\5\b\y\v\e\o\a\y\z\m\g\e\5\n\x\v\7\z\7\b\f\9\o\f\r\s\v\s\f\u\b\9\i\3\j\j\1\g\s\p\b\6\5\2\s\5\p\t\w\w\y\t\l\s\f\y\a\y\t\d\h\m\3\1\c\t\u\0\1\t\l\t\l\w\g\j\0\x\3\4\s\6\0\u\d\9\c\w\a\o\g\w\k\k\9\w\y\b\4\b\m\s\7\4\2\h\d\p\k\2\2\0\1\4\b\1\z\1\7\9\k\j\u\i\w\y\8\h\q\0\h\i\f\0\p\h\x\3\l\2\i\a\z\z\1\k\3\f\n\h\d\s\x\w\d\j\t\9\m\r\u\8\o\i\2\t\h\2\4\j\l\z\b\k\i\l\6\i\7\0\i\5\r\8\3\7\t\q\8\4\7\i\6\8\s\9\h\8\f\7\r\p\b\f\s\m\a\z\a\k\4\1\h\v\c\k\s\8\q\c\z\q\2\f\v\i\h\i\w\7\m\h\u\e\p\k\t\i\3 ]] 00:25:56.969 00:25:56.969 real 0m6.329s 00:25:56.969 user 0m5.045s 00:25:56.969 sys 0m0.684s 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:25:56.969 ************************************ 00:25:56.969 START TEST dd_flag_noatime 00:25:56.969 ************************************ 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1121 -- # noatime 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 
-- # gen_bytes 512 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:25:56.969 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:25:56.970 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:56.970 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1715730078 00:25:56.970 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:56.970 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1715730079 00:25:56.970 23:41:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:25:57.906 23:41:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:57.906 [2024-05-14 23:41:21.054399] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:25:57.906 [2024-05-14 23:41:21.054575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78537 ] 00:25:58.166 [2024-05-14 23:41:21.214138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.166 [2024-05-14 23:41:21.427515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.678  Copying: 512/512 [B] (average 500 kBps) 00:25:59.678 00:25:59.937 23:41:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:59.937 23:41:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1715730078 )) 00:25:59.937 23:41:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:59.937 23:41:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1715730079 )) 00:25:59.937 23:41:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:59.938 [2024-05-14 23:41:23.113099] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:25:59.938 [2024-05-14 23:41:23.113547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78569 ] 00:26:00.196 [2024-05-14 23:41:23.287792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.455 [2024-05-14 23:41:23.517766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.090  Copying: 512/512 [B] (average 500 kBps) 00:26:02.090 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:02.090 ************************************ 00:26:02.090 END TEST dd_flag_noatime 00:26:02.090 ************************************ 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1715730083 )) 00:26:02.090 00:26:02.090 real 0m5.212s 00:26:02.090 user 0m3.314s 00:26:02.090 sys 0m0.494s 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:26:02.090 ************************************ 00:26:02.090 START TEST dd_flags_misc 00:26:02.090 ************************************ 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1121 -- # io 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:02.090 23:41:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:02.090 [2024-05-14 23:41:25.303385] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:02.090 [2024-05-14 23:41:25.303580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78620 ] 00:26:02.349 [2024-05-14 23:41:25.467581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.607 [2024-05-14 23:41:25.678896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.245  Copying: 512/512 [B] (average 500 kBps) 00:26:04.245 00:26:04.245 23:41:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4emzui4w4hxgt1h5nop1kvtwhfljn8thlaobonak9ptdbedw85stkigi444tjjli2q9slqcg0gmr3gy1138rmroniagpjihq1lclugjbtbx1ncrinudkerovwqhhbvvzbbxoq364ndms2kymy2g8j6jb301xjlybfohhirt0vdh4dthckr00z0vc4mgyuwjajuyk23kt3xdamwtprec0u3j2x5m7y5elcms14vy0px45ufwcedpba92t9i9v6r7g4kma73q2c6mo1i53oy1p01a9cmrz0x9f5srq1ugu63ljctafz351n4h8ipslcocvt72phihdtbuaki2mam0d97frd92r9vds35y4ttnf9ppljbgu7721rvvpdcxzmsmi8kyfaclabmya66p18bewoshjahdopiroezii76o43atnrurxxo1rmvcf8c419904niazt4yrc1fiftw6ouf50diklvn5c57cndptj7n5ea41urndfutfrh1xwcg3ilpj == \4\e\m\z\u\i\4\w\4\h\x\g\t\1\h\5\n\o\p\1\k\v\t\w\h\f\l\j\n\8\t\h\l\a\o\b\o\n\a\k\9\p\t\d\b\e\d\w\8\5\s\t\k\i\g\i\4\4\4\t\j\j\l\i\2\q\9\s\l\q\c\g\0\g\m\r\3\g\y\1\1\3\8\r\m\r\o\n\i\a\g\p\j\i\h\q\1\l\c\l\u\g\j\b\t\b\x\1\n\c\r\i\n\u\d\k\e\r\o\v\w\q\h\h\b\v\v\z\b\b\x\o\q\3\6\4\n\d\m\s\2\k\y\m\y\2\g\8\j\6\j\b\3\0\1\x\j\l\y\b\f\o\h\h\i\r\t\0\v\d\h\4\d\t\h\c\k\r\0\0\z\0\v\c\4\m\g\y\u\w\j\a\j\u\y\k\2\3\k\t\3\x\d\a\m\w\t\p\r\e\c\0\u\3\j\2\x\5\m\7\y\5\e\l\c\m\s\1\4\v\y\0\p\x\4\5\u\f\w\c\e\d\p\b\a\9\2\t\9\i\9\v\6\r\7\g\4\k\m\a\7\3\q\2\c\6\m\o\1\i\5\3\o\y\1\p\0\1\a\9\c\m\r\z\0\x\9\f\5\s\r\q\1\u\g\u\6\3\l\j\c\t\a\f\z\3\5\1\n\4\h\8\i\p\s\l\c\o\c\v\t\7\2\p\h\i\h\d\t\b\u\a\k\i\2\m\a\m\0\d\9\7\f\r\d\9\2\r\9\v\d\s\3\5\y\4\t\t\n\f\9\p\p\l\j\b\g\u\7\7\2\1\r\v\v\p\d\c\x\z\m\s\m\i\8\k\y\f\a\c\l\a\b\m\y\a\6\6\p\1\8\b\e\w\o\s\h\j\a\h\d\o\p\i\r\o\e\z\i\i\7\6\o\4\3\a\t\n\r\u\r\x\x\o\1\r\m\v\c\f\8\c\4\1\9\9\0\4\n\i\a\z\t\4\y\r\c\1\f\i\f\t\w\6\o\u\f\5\0\d\i\k\l\v\n\5\c\5\7\c\n\d\p\t\j\7\n\5\e\a\4\1\u\r\n\d\f\u\t\f\r\h\1\x\w\c\g\3\i\l\p\j ]] 00:26:04.245 23:41:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:04.245 23:41:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:04.245 [2024-05-14 23:41:27.403574] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:04.245 [2024-05-14 23:41:27.403765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78649 ] 00:26:04.504 [2024-05-14 23:41:27.554655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.504 [2024-05-14 23:41:27.770052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.448  Copying: 512/512 [B] (average 500 kBps) 00:26:06.448 00:26:06.448 23:41:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4emzui4w4hxgt1h5nop1kvtwhfljn8thlaobonak9ptdbedw85stkigi444tjjli2q9slqcg0gmr3gy1138rmroniagpjihq1lclugjbtbx1ncrinudkerovwqhhbvvzbbxoq364ndms2kymy2g8j6jb301xjlybfohhirt0vdh4dthckr00z0vc4mgyuwjajuyk23kt3xdamwtprec0u3j2x5m7y5elcms14vy0px45ufwcedpba92t9i9v6r7g4kma73q2c6mo1i53oy1p01a9cmrz0x9f5srq1ugu63ljctafz351n4h8ipslcocvt72phihdtbuaki2mam0d97frd92r9vds35y4ttnf9ppljbgu7721rvvpdcxzmsmi8kyfaclabmya66p18bewoshjahdopiroezii76o43atnrurxxo1rmvcf8c419904niazt4yrc1fiftw6ouf50diklvn5c57cndptj7n5ea41urndfutfrh1xwcg3ilpj == \4\e\m\z\u\i\4\w\4\h\x\g\t\1\h\5\n\o\p\1\k\v\t\w\h\f\l\j\n\8\t\h\l\a\o\b\o\n\a\k\9\p\t\d\b\e\d\w\8\5\s\t\k\i\g\i\4\4\4\t\j\j\l\i\2\q\9\s\l\q\c\g\0\g\m\r\3\g\y\1\1\3\8\r\m\r\o\n\i\a\g\p\j\i\h\q\1\l\c\l\u\g\j\b\t\b\x\1\n\c\r\i\n\u\d\k\e\r\o\v\w\q\h\h\b\v\v\z\b\b\x\o\q\3\6\4\n\d\m\s\2\k\y\m\y\2\g\8\j\6\j\b\3\0\1\x\j\l\y\b\f\o\h\h\i\r\t\0\v\d\h\4\d\t\h\c\k\r\0\0\z\0\v\c\4\m\g\y\u\w\j\a\j\u\y\k\2\3\k\t\3\x\d\a\m\w\t\p\r\e\c\0\u\3\j\2\x\5\m\7\y\5\e\l\c\m\s\1\4\v\y\0\p\x\4\5\u\f\w\c\e\d\p\b\a\9\2\t\9\i\9\v\6\r\7\g\4\k\m\a\7\3\q\2\c\6\m\o\1\i\5\3\o\y\1\p\0\1\a\9\c\m\r\z\0\x\9\f\5\s\r\q\1\u\g\u\6\3\l\j\c\t\a\f\z\3\5\1\n\4\h\8\i\p\s\l\c\o\c\v\t\7\2\p\h\i\h\d\t\b\u\a\k\i\2\m\a\m\0\d\9\7\f\r\d\9\2\r\9\v\d\s\3\5\y\4\t\t\n\f\9\p\p\l\j\b\g\u\7\7\2\1\r\v\v\p\d\c\x\z\m\s\m\i\8\k\y\f\a\c\l\a\b\m\y\a\6\6\p\1\8\b\e\w\o\s\h\j\a\h\d\o\p\i\r\o\e\z\i\i\7\6\o\4\3\a\t\n\r\u\r\x\x\o\1\r\m\v\c\f\8\c\4\1\9\9\0\4\n\i\a\z\t\4\y\r\c\1\f\i\f\t\w\6\o\u\f\5\0\d\i\k\l\v\n\5\c\5\7\c\n\d\p\t\j\7\n\5\e\a\4\1\u\r\n\d\f\u\t\f\r\h\1\x\w\c\g\3\i\l\p\j ]] 00:26:06.448 23:41:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:06.448 23:41:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:06.448 [2024-05-14 23:41:29.476077] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:06.448 [2024-05-14 23:41:29.476502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78683 ] 00:26:06.448 [2024-05-14 23:41:29.629484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.707 [2024-05-14 23:41:29.852423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.342  Copying: 512/512 [B] (average 166 kBps) 00:26:08.342 00:26:08.342 23:41:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4emzui4w4hxgt1h5nop1kvtwhfljn8thlaobonak9ptdbedw85stkigi444tjjli2q9slqcg0gmr3gy1138rmroniagpjihq1lclugjbtbx1ncrinudkerovwqhhbvvzbbxoq364ndms2kymy2g8j6jb301xjlybfohhirt0vdh4dthckr00z0vc4mgyuwjajuyk23kt3xdamwtprec0u3j2x5m7y5elcms14vy0px45ufwcedpba92t9i9v6r7g4kma73q2c6mo1i53oy1p01a9cmrz0x9f5srq1ugu63ljctafz351n4h8ipslcocvt72phihdtbuaki2mam0d97frd92r9vds35y4ttnf9ppljbgu7721rvvpdcxzmsmi8kyfaclabmya66p18bewoshjahdopiroezii76o43atnrurxxo1rmvcf8c419904niazt4yrc1fiftw6ouf50diklvn5c57cndptj7n5ea41urndfutfrh1xwcg3ilpj == \4\e\m\z\u\i\4\w\4\h\x\g\t\1\h\5\n\o\p\1\k\v\t\w\h\f\l\j\n\8\t\h\l\a\o\b\o\n\a\k\9\p\t\d\b\e\d\w\8\5\s\t\k\i\g\i\4\4\4\t\j\j\l\i\2\q\9\s\l\q\c\g\0\g\m\r\3\g\y\1\1\3\8\r\m\r\o\n\i\a\g\p\j\i\h\q\1\l\c\l\u\g\j\b\t\b\x\1\n\c\r\i\n\u\d\k\e\r\o\v\w\q\h\h\b\v\v\z\b\b\x\o\q\3\6\4\n\d\m\s\2\k\y\m\y\2\g\8\j\6\j\b\3\0\1\x\j\l\y\b\f\o\h\h\i\r\t\0\v\d\h\4\d\t\h\c\k\r\0\0\z\0\v\c\4\m\g\y\u\w\j\a\j\u\y\k\2\3\k\t\3\x\d\a\m\w\t\p\r\e\c\0\u\3\j\2\x\5\m\7\y\5\e\l\c\m\s\1\4\v\y\0\p\x\4\5\u\f\w\c\e\d\p\b\a\9\2\t\9\i\9\v\6\r\7\g\4\k\m\a\7\3\q\2\c\6\m\o\1\i\5\3\o\y\1\p\0\1\a\9\c\m\r\z\0\x\9\f\5\s\r\q\1\u\g\u\6\3\l\j\c\t\a\f\z\3\5\1\n\4\h\8\i\p\s\l\c\o\c\v\t\7\2\p\h\i\h\d\t\b\u\a\k\i\2\m\a\m\0\d\9\7\f\r\d\9\2\r\9\v\d\s\3\5\y\4\t\t\n\f\9\p\p\l\j\b\g\u\7\7\2\1\r\v\v\p\d\c\x\z\m\s\m\i\8\k\y\f\a\c\l\a\b\m\y\a\6\6\p\1\8\b\e\w\o\s\h\j\a\h\d\o\p\i\r\o\e\z\i\i\7\6\o\4\3\a\t\n\r\u\r\x\x\o\1\r\m\v\c\f\8\c\4\1\9\9\0\4\n\i\a\z\t\4\y\r\c\1\f\i\f\t\w\6\o\u\f\5\0\d\i\k\l\v\n\5\c\5\7\c\n\d\p\t\j\7\n\5\e\a\4\1\u\r\n\d\f\u\t\f\r\h\1\x\w\c\g\3\i\l\p\j ]] 00:26:08.342 23:41:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:08.342 23:41:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:08.342 [2024-05-14 23:41:31.617835] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:08.342 [2024-05-14 23:41:31.618015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78707 ] 00:26:08.601 [2024-05-14 23:41:31.774547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.859 [2024-05-14 23:41:31.991662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.494  Copying: 512/512 [B] (average 250 kBps) 00:26:10.494 00:26:10.494 23:41:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4emzui4w4hxgt1h5nop1kvtwhfljn8thlaobonak9ptdbedw85stkigi444tjjli2q9slqcg0gmr3gy1138rmroniagpjihq1lclugjbtbx1ncrinudkerovwqhhbvvzbbxoq364ndms2kymy2g8j6jb301xjlybfohhirt0vdh4dthckr00z0vc4mgyuwjajuyk23kt3xdamwtprec0u3j2x5m7y5elcms14vy0px45ufwcedpba92t9i9v6r7g4kma73q2c6mo1i53oy1p01a9cmrz0x9f5srq1ugu63ljctafz351n4h8ipslcocvt72phihdtbuaki2mam0d97frd92r9vds35y4ttnf9ppljbgu7721rvvpdcxzmsmi8kyfaclabmya66p18bewoshjahdopiroezii76o43atnrurxxo1rmvcf8c419904niazt4yrc1fiftw6ouf50diklvn5c57cndptj7n5ea41urndfutfrh1xwcg3ilpj == \4\e\m\z\u\i\4\w\4\h\x\g\t\1\h\5\n\o\p\1\k\v\t\w\h\f\l\j\n\8\t\h\l\a\o\b\o\n\a\k\9\p\t\d\b\e\d\w\8\5\s\t\k\i\g\i\4\4\4\t\j\j\l\i\2\q\9\s\l\q\c\g\0\g\m\r\3\g\y\1\1\3\8\r\m\r\o\n\i\a\g\p\j\i\h\q\1\l\c\l\u\g\j\b\t\b\x\1\n\c\r\i\n\u\d\k\e\r\o\v\w\q\h\h\b\v\v\z\b\b\x\o\q\3\6\4\n\d\m\s\2\k\y\m\y\2\g\8\j\6\j\b\3\0\1\x\j\l\y\b\f\o\h\h\i\r\t\0\v\d\h\4\d\t\h\c\k\r\0\0\z\0\v\c\4\m\g\y\u\w\j\a\j\u\y\k\2\3\k\t\3\x\d\a\m\w\t\p\r\e\c\0\u\3\j\2\x\5\m\7\y\5\e\l\c\m\s\1\4\v\y\0\p\x\4\5\u\f\w\c\e\d\p\b\a\9\2\t\9\i\9\v\6\r\7\g\4\k\m\a\7\3\q\2\c\6\m\o\1\i\5\3\o\y\1\p\0\1\a\9\c\m\r\z\0\x\9\f\5\s\r\q\1\u\g\u\6\3\l\j\c\t\a\f\z\3\5\1\n\4\h\8\i\p\s\l\c\o\c\v\t\7\2\p\h\i\h\d\t\b\u\a\k\i\2\m\a\m\0\d\9\7\f\r\d\9\2\r\9\v\d\s\3\5\y\4\t\t\n\f\9\p\p\l\j\b\g\u\7\7\2\1\r\v\v\p\d\c\x\z\m\s\m\i\8\k\y\f\a\c\l\a\b\m\y\a\6\6\p\1\8\b\e\w\o\s\h\j\a\h\d\o\p\i\r\o\e\z\i\i\7\6\o\4\3\a\t\n\r\u\r\x\x\o\1\r\m\v\c\f\8\c\4\1\9\9\0\4\n\i\a\z\t\4\y\r\c\1\f\i\f\t\w\6\o\u\f\5\0\d\i\k\l\v\n\5\c\5\7\c\n\d\p\t\j\7\n\5\e\a\4\1\u\r\n\d\f\u\t\f\r\h\1\x\w\c\g\3\i\l\p\j ]] 00:26:10.494 23:41:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:10.495 23:41:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:26:10.495 23:41:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:26:10.495 23:41:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:26:10.495 23:41:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:10.495 23:41:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:10.495 [2024-05-14 23:41:33.734665] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:10.495 [2024-05-14 23:41:33.734876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78739 ] 00:26:10.753 [2024-05-14 23:41:33.907274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.011 [2024-05-14 23:41:34.124296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.667  Copying: 512/512 [B] (average 500 kBps) 00:26:12.667 00:26:12.667 23:41:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5rfpt1eveztu3msz9gk3bwyce6z6b1ez0thb69bkkfybc5su4h9ki1zyk84cto6auwexbbrvzbgw9bnjrp1mgrxmpwlmmr2rbxermtujjf7gyh0r9j64ed0uv14qyyejsqmo1c7nggc5eb525izam3smv67popwd5a5t92q1nm2ykhs7ic0s2d9p5ly2l12i2wrpkvzzn9bghihp4bbr6tfo61t8cxca9on0kl7pj3y90puzyjj7jvj8mjos0vnj1u2tzhfldjqmwqaui5lmdjn3pji0ht79ua0di7u6slaioqnj0nk0orysq50draskqah8ujmnwekpb8k56h22b0cpts3zayxudayz0nuz2rkslkxgy27qi6mrh6r7p5ueqdsrhg0su309a9ie6lxj9fqkq5o3v3ur5bdne48kdgzannnrmte3mwb59fsntlzmnc15jyngvskfo1acrm3ddg5bxe72m2bsr75gg14km7i6ygqrpd9zu3uieo856d88 == \5\r\f\p\t\1\e\v\e\z\t\u\3\m\s\z\9\g\k\3\b\w\y\c\e\6\z\6\b\1\e\z\0\t\h\b\6\9\b\k\k\f\y\b\c\5\s\u\4\h\9\k\i\1\z\y\k\8\4\c\t\o\6\a\u\w\e\x\b\b\r\v\z\b\g\w\9\b\n\j\r\p\1\m\g\r\x\m\p\w\l\m\m\r\2\r\b\x\e\r\m\t\u\j\j\f\7\g\y\h\0\r\9\j\6\4\e\d\0\u\v\1\4\q\y\y\e\j\s\q\m\o\1\c\7\n\g\g\c\5\e\b\5\2\5\i\z\a\m\3\s\m\v\6\7\p\o\p\w\d\5\a\5\t\9\2\q\1\n\m\2\y\k\h\s\7\i\c\0\s\2\d\9\p\5\l\y\2\l\1\2\i\2\w\r\p\k\v\z\z\n\9\b\g\h\i\h\p\4\b\b\r\6\t\f\o\6\1\t\8\c\x\c\a\9\o\n\0\k\l\7\p\j\3\y\9\0\p\u\z\y\j\j\7\j\v\j\8\m\j\o\s\0\v\n\j\1\u\2\t\z\h\f\l\d\j\q\m\w\q\a\u\i\5\l\m\d\j\n\3\p\j\i\0\h\t\7\9\u\a\0\d\i\7\u\6\s\l\a\i\o\q\n\j\0\n\k\0\o\r\y\s\q\5\0\d\r\a\s\k\q\a\h\8\u\j\m\n\w\e\k\p\b\8\k\5\6\h\2\2\b\0\c\p\t\s\3\z\a\y\x\u\d\a\y\z\0\n\u\z\2\r\k\s\l\k\x\g\y\2\7\q\i\6\m\r\h\6\r\7\p\5\u\e\q\d\s\r\h\g\0\s\u\3\0\9\a\9\i\e\6\l\x\j\9\f\q\k\q\5\o\3\v\3\u\r\5\b\d\n\e\4\8\k\d\g\z\a\n\n\n\r\m\t\e\3\m\w\b\5\9\f\s\n\t\l\z\m\n\c\1\5\j\y\n\g\v\s\k\f\o\1\a\c\r\m\3\d\d\g\5\b\x\e\7\2\m\2\b\s\r\7\5\g\g\1\4\k\m\7\i\6\y\g\q\r\p\d\9\z\u\3\u\i\e\o\8\5\6\d\8\8 ]] 00:26:12.667 23:41:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:12.667 23:41:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:12.667 [2024-05-14 23:41:35.843504] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:12.667 [2024-05-14 23:41:35.843697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78767 ] 00:26:12.926 [2024-05-14 23:41:36.002688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.184 [2024-05-14 23:41:36.257709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.820  Copying: 512/512 [B] (average 500 kBps) 00:26:14.820 00:26:14.820 23:41:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5rfpt1eveztu3msz9gk3bwyce6z6b1ez0thb69bkkfybc5su4h9ki1zyk84cto6auwexbbrvzbgw9bnjrp1mgrxmpwlmmr2rbxermtujjf7gyh0r9j64ed0uv14qyyejsqmo1c7nggc5eb525izam3smv67popwd5a5t92q1nm2ykhs7ic0s2d9p5ly2l12i2wrpkvzzn9bghihp4bbr6tfo61t8cxca9on0kl7pj3y90puzyjj7jvj8mjos0vnj1u2tzhfldjqmwqaui5lmdjn3pji0ht79ua0di7u6slaioqnj0nk0orysq50draskqah8ujmnwekpb8k56h22b0cpts3zayxudayz0nuz2rkslkxgy27qi6mrh6r7p5ueqdsrhg0su309a9ie6lxj9fqkq5o3v3ur5bdne48kdgzannnrmte3mwb59fsntlzmnc15jyngvskfo1acrm3ddg5bxe72m2bsr75gg14km7i6ygqrpd9zu3uieo856d88 == \5\r\f\p\t\1\e\v\e\z\t\u\3\m\s\z\9\g\k\3\b\w\y\c\e\6\z\6\b\1\e\z\0\t\h\b\6\9\b\k\k\f\y\b\c\5\s\u\4\h\9\k\i\1\z\y\k\8\4\c\t\o\6\a\u\w\e\x\b\b\r\v\z\b\g\w\9\b\n\j\r\p\1\m\g\r\x\m\p\w\l\m\m\r\2\r\b\x\e\r\m\t\u\j\j\f\7\g\y\h\0\r\9\j\6\4\e\d\0\u\v\1\4\q\y\y\e\j\s\q\m\o\1\c\7\n\g\g\c\5\e\b\5\2\5\i\z\a\m\3\s\m\v\6\7\p\o\p\w\d\5\a\5\t\9\2\q\1\n\m\2\y\k\h\s\7\i\c\0\s\2\d\9\p\5\l\y\2\l\1\2\i\2\w\r\p\k\v\z\z\n\9\b\g\h\i\h\p\4\b\b\r\6\t\f\o\6\1\t\8\c\x\c\a\9\o\n\0\k\l\7\p\j\3\y\9\0\p\u\z\y\j\j\7\j\v\j\8\m\j\o\s\0\v\n\j\1\u\2\t\z\h\f\l\d\j\q\m\w\q\a\u\i\5\l\m\d\j\n\3\p\j\i\0\h\t\7\9\u\a\0\d\i\7\u\6\s\l\a\i\o\q\n\j\0\n\k\0\o\r\y\s\q\5\0\d\r\a\s\k\q\a\h\8\u\j\m\n\w\e\k\p\b\8\k\5\6\h\2\2\b\0\c\p\t\s\3\z\a\y\x\u\d\a\y\z\0\n\u\z\2\r\k\s\l\k\x\g\y\2\7\q\i\6\m\r\h\6\r\7\p\5\u\e\q\d\s\r\h\g\0\s\u\3\0\9\a\9\i\e\6\l\x\j\9\f\q\k\q\5\o\3\v\3\u\r\5\b\d\n\e\4\8\k\d\g\z\a\n\n\n\r\m\t\e\3\m\w\b\5\9\f\s\n\t\l\z\m\n\c\1\5\j\y\n\g\v\s\k\f\o\1\a\c\r\m\3\d\d\g\5\b\x\e\7\2\m\2\b\s\r\7\5\g\g\1\4\k\m\7\i\6\y\g\q\r\p\d\9\z\u\3\u\i\e\o\8\5\6\d\8\8 ]] 00:26:14.820 23:41:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:14.820 23:41:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:14.820 [2024-05-14 23:41:38.019637] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:14.820 [2024-05-14 23:41:38.019822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78792 ] 00:26:15.080 [2024-05-14 23:41:38.189119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.338 [2024-05-14 23:41:38.409316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.014  Copying: 512/512 [B] (average 166 kBps) 00:26:17.014 00:26:17.014 23:41:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5rfpt1eveztu3msz9gk3bwyce6z6b1ez0thb69bkkfybc5su4h9ki1zyk84cto6auwexbbrvzbgw9bnjrp1mgrxmpwlmmr2rbxermtujjf7gyh0r9j64ed0uv14qyyejsqmo1c7nggc5eb525izam3smv67popwd5a5t92q1nm2ykhs7ic0s2d9p5ly2l12i2wrpkvzzn9bghihp4bbr6tfo61t8cxca9on0kl7pj3y90puzyjj7jvj8mjos0vnj1u2tzhfldjqmwqaui5lmdjn3pji0ht79ua0di7u6slaioqnj0nk0orysq50draskqah8ujmnwekpb8k56h22b0cpts3zayxudayz0nuz2rkslkxgy27qi6mrh6r7p5ueqdsrhg0su309a9ie6lxj9fqkq5o3v3ur5bdne48kdgzannnrmte3mwb59fsntlzmnc15jyngvskfo1acrm3ddg5bxe72m2bsr75gg14km7i6ygqrpd9zu3uieo856d88 == \5\r\f\p\t\1\e\v\e\z\t\u\3\m\s\z\9\g\k\3\b\w\y\c\e\6\z\6\b\1\e\z\0\t\h\b\6\9\b\k\k\f\y\b\c\5\s\u\4\h\9\k\i\1\z\y\k\8\4\c\t\o\6\a\u\w\e\x\b\b\r\v\z\b\g\w\9\b\n\j\r\p\1\m\g\r\x\m\p\w\l\m\m\r\2\r\b\x\e\r\m\t\u\j\j\f\7\g\y\h\0\r\9\j\6\4\e\d\0\u\v\1\4\q\y\y\e\j\s\q\m\o\1\c\7\n\g\g\c\5\e\b\5\2\5\i\z\a\m\3\s\m\v\6\7\p\o\p\w\d\5\a\5\t\9\2\q\1\n\m\2\y\k\h\s\7\i\c\0\s\2\d\9\p\5\l\y\2\l\1\2\i\2\w\r\p\k\v\z\z\n\9\b\g\h\i\h\p\4\b\b\r\6\t\f\o\6\1\t\8\c\x\c\a\9\o\n\0\k\l\7\p\j\3\y\9\0\p\u\z\y\j\j\7\j\v\j\8\m\j\o\s\0\v\n\j\1\u\2\t\z\h\f\l\d\j\q\m\w\q\a\u\i\5\l\m\d\j\n\3\p\j\i\0\h\t\7\9\u\a\0\d\i\7\u\6\s\l\a\i\o\q\n\j\0\n\k\0\o\r\y\s\q\5\0\d\r\a\s\k\q\a\h\8\u\j\m\n\w\e\k\p\b\8\k\5\6\h\2\2\b\0\c\p\t\s\3\z\a\y\x\u\d\a\y\z\0\n\u\z\2\r\k\s\l\k\x\g\y\2\7\q\i\6\m\r\h\6\r\7\p\5\u\e\q\d\s\r\h\g\0\s\u\3\0\9\a\9\i\e\6\l\x\j\9\f\q\k\q\5\o\3\v\3\u\r\5\b\d\n\e\4\8\k\d\g\z\a\n\n\n\r\m\t\e\3\m\w\b\5\9\f\s\n\t\l\z\m\n\c\1\5\j\y\n\g\v\s\k\f\o\1\a\c\r\m\3\d\d\g\5\b\x\e\7\2\m\2\b\s\r\7\5\g\g\1\4\k\m\7\i\6\y\g\q\r\p\d\9\z\u\3\u\i\e\o\8\5\6\d\8\8 ]] 00:26:17.014 23:41:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:17.014 23:41:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:17.014 [2024-05-14 23:41:40.158377] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:17.014 [2024-05-14 23:41:40.158582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78821 ] 00:26:17.273 [2024-05-14 23:41:40.312275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.273 [2024-05-14 23:41:40.527930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.218  Copying: 512/512 [B] (average 250 kBps) 00:26:19.218 00:26:19.218 ************************************ 00:26:19.218 END TEST dd_flags_misc 00:26:19.218 ************************************ 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5rfpt1eveztu3msz9gk3bwyce6z6b1ez0thb69bkkfybc5su4h9ki1zyk84cto6auwexbbrvzbgw9bnjrp1mgrxmpwlmmr2rbxermtujjf7gyh0r9j64ed0uv14qyyejsqmo1c7nggc5eb525izam3smv67popwd5a5t92q1nm2ykhs7ic0s2d9p5ly2l12i2wrpkvzzn9bghihp4bbr6tfo61t8cxca9on0kl7pj3y90puzyjj7jvj8mjos0vnj1u2tzhfldjqmwqaui5lmdjn3pji0ht79ua0di7u6slaioqnj0nk0orysq50draskqah8ujmnwekpb8k56h22b0cpts3zayxudayz0nuz2rkslkxgy27qi6mrh6r7p5ueqdsrhg0su309a9ie6lxj9fqkq5o3v3ur5bdne48kdgzannnrmte3mwb59fsntlzmnc15jyngvskfo1acrm3ddg5bxe72m2bsr75gg14km7i6ygqrpd9zu3uieo856d88 == \5\r\f\p\t\1\e\v\e\z\t\u\3\m\s\z\9\g\k\3\b\w\y\c\e\6\z\6\b\1\e\z\0\t\h\b\6\9\b\k\k\f\y\b\c\5\s\u\4\h\9\k\i\1\z\y\k\8\4\c\t\o\6\a\u\w\e\x\b\b\r\v\z\b\g\w\9\b\n\j\r\p\1\m\g\r\x\m\p\w\l\m\m\r\2\r\b\x\e\r\m\t\u\j\j\f\7\g\y\h\0\r\9\j\6\4\e\d\0\u\v\1\4\q\y\y\e\j\s\q\m\o\1\c\7\n\g\g\c\5\e\b\5\2\5\i\z\a\m\3\s\m\v\6\7\p\o\p\w\d\5\a\5\t\9\2\q\1\n\m\2\y\k\h\s\7\i\c\0\s\2\d\9\p\5\l\y\2\l\1\2\i\2\w\r\p\k\v\z\z\n\9\b\g\h\i\h\p\4\b\b\r\6\t\f\o\6\1\t\8\c\x\c\a\9\o\n\0\k\l\7\p\j\3\y\9\0\p\u\z\y\j\j\7\j\v\j\8\m\j\o\s\0\v\n\j\1\u\2\t\z\h\f\l\d\j\q\m\w\q\a\u\i\5\l\m\d\j\n\3\p\j\i\0\h\t\7\9\u\a\0\d\i\7\u\6\s\l\a\i\o\q\n\j\0\n\k\0\o\r\y\s\q\5\0\d\r\a\s\k\q\a\h\8\u\j\m\n\w\e\k\p\b\8\k\5\6\h\2\2\b\0\c\p\t\s\3\z\a\y\x\u\d\a\y\z\0\n\u\z\2\r\k\s\l\k\x\g\y\2\7\q\i\6\m\r\h\6\r\7\p\5\u\e\q\d\s\r\h\g\0\s\u\3\0\9\a\9\i\e\6\l\x\j\9\f\q\k\q\5\o\3\v\3\u\r\5\b\d\n\e\4\8\k\d\g\z\a\n\n\n\r\m\t\e\3\m\w\b\5\9\f\s\n\t\l\z\m\n\c\1\5\j\y\n\g\v\s\k\f\o\1\a\c\r\m\3\d\d\g\5\b\x\e\7\2\m\2\b\s\r\7\5\g\g\1\4\k\m\7\i\6\y\g\q\r\p\d\9\z\u\3\u\i\e\o\8\5\6\d\8\8 ]] 00:26:19.218 00:26:19.218 real 0m16.999s 00:26:19.218 user 0m13.451s 00:26:19.218 sys 0m1.915s 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:26:19.218 * Second test run, using AIO 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:26:19.218 ************************************ 00:26:19.218 START TEST dd_flag_append_forced_aio 00:26:19.218 ************************************ 00:26:19.218 23:41:42 
spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1121 -- # append 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=pvye36cjdkvypszwn8ydx3jyihzsqgxu 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=mzy6eko93n22upgjjnplm3htq8diovnh 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s pvye36cjdkvypszwn8ydx3jyihzsqgxu 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s mzy6eko93n22upgjjnplm3htq8diovnh 00:26:19.218 23:41:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:26:19.218 [2024-05-14 23:41:42.350629] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:19.219 [2024-05-14 23:41:42.350886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78872 ] 00:26:19.477 [2024-05-14 23:41:42.516751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.477 [2024-05-14 23:41:42.760894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.417  Copying: 32/32 [B] (average 31 kBps) 00:26:21.417 00:26:21.417 ************************************ 00:26:21.417 END TEST dd_flag_append_forced_aio 00:26:21.417 ************************************ 00:26:21.417 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ mzy6eko93n22upgjjnplm3htq8diovnhpvye36cjdkvypszwn8ydx3jyihzsqgxu == \m\z\y\6\e\k\o\9\3\n\2\2\u\p\g\j\j\n\p\l\m\3\h\t\q\8\d\i\o\v\n\h\p\v\y\e\3\6\c\j\d\k\v\y\p\s\z\w\n\8\y\d\x\3\j\y\i\h\z\s\q\g\x\u ]] 00:26:21.417 00:26:21.417 real 0m2.136s 00:26:21.417 user 0m1.691s 00:26:21.417 sys 0m0.244s 00:26:21.417 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:26:21.418 ************************************ 00:26:21.418 START TEST dd_flag_directory_forced_aio 00:26:21.418 ************************************ 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1121 -- # directory 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.418 23:41:44 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:21.418 23:41:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:21.418 [2024-05-14 23:41:44.531384] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:26:21.418 [2024-05-14 23:41:44.531547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78923 ] 00:26:21.418 [2024-05-14 23:41:44.699317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.677 [2024-05-14 23:41:44.915173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.244 [2024-05-14 23:41:45.271682] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:22.244 [2024-05-14 23:41:45.271773] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:22.244 [2024-05-14 23:41:45.271808] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:23.179 [2024-05-14 23:41:46.117305] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:23.438 23:41:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:23.438 [2024-05-14 23:41:46.649393] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:26:23.438 [2024-05-14 23:41:46.649561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78956 ] 00:26:23.697 [2024-05-14 23:41:46.810546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.956 [2024-05-14 23:41:47.023754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.215 [2024-05-14 23:41:47.370960] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:24.215 [2024-05-14 23:41:47.371046] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:24.215 [2024-05-14 23:41:47.371082] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:25.150 [2024-05-14 23:41:48.201156] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:26:25.409 ************************************ 00:26:25.409 END TEST dd_flag_directory_forced_aio 00:26:25.409 ************************************ 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:25.409 00:26:25.409 real 0m4.182s 00:26:25.409 user 0m3.307s 00:26:25.409 sys 0m0.480s 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:25.409 23:41:48 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:26:25.409 ************************************ 00:26:25.409 START TEST dd_flag_nofollow_forced_aio 00:26:25.409 ************************************ 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1121 -- # nofollow 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:25.409 23:41:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:25.668 [2024-05-14 23:41:48.763427] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:26:25.668 [2024-05-14 23:41:48.763612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79001 ] 00:26:25.668 [2024-05-14 23:41:48.915751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.926 [2024-05-14 23:41:49.131721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.501 [2024-05-14 23:41:49.477241] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:26.501 [2024-05-14 23:41:49.477338] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:26.501 [2024-05-14 23:41:49.477377] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:27.068 [2024-05-14 23:41:50.311236] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:27.636 23:41:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:27.636 [2024-05-14 23:41:50.833814] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:26:27.636 [2024-05-14 23:41:50.834026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79036 ] 00:26:27.898 [2024-05-14 23:41:50.997005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.157 [2024-05-14 23:41:51.267093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.415 [2024-05-14 23:41:51.679748] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:28.415 [2024-05-14 23:41:51.679853] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:28.415 [2024-05-14 23:41:51.679902] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:29.352 [2024-05-14 23:41:52.538890] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:26:29.920 23:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:26:29.920 23:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:29.920 23:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:26:29.920 23:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:26:29.920 23:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:26:29.920 23:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:29.920 23:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:26:29.920 23:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:26:29.920 23:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:26:29.920 23:41:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:29.920 [2024-05-14 23:41:53.097772] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:29.920 [2024-05-14 23:41:53.097995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79067 ] 00:26:30.179 [2024-05-14 23:41:53.257163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.438 [2024-05-14 23:41:53.504802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.073  Copying: 512/512 [B] (average 500 kBps) 00:26:32.073 00:26:32.073 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ytyhjg9g5bcqmofu4ci9an0ce8y2d74rk09u27y8btuau4baexrz272fh5xq4m1impxtvnww8htz4c02pj4o3upro4inifbf9djfs7pi9hfutbyet3holzmm94ghi502luygjnuh4up6vmva3x6b8ufnn2j2uk2c5oidnyu2oi170196ey28dwm3fyc0b2e133gmkhckl8j8x0s4gn1jdh1ixmuiw06dkzkf8o18b8akkswu7gll55upgoah8ap4copihi34skp1kfk7z0akt8un8srw138vloxsa13ktawd3hkiamragoajjkl9ogr4lech9ag48uaic4pdublujdmrqj4cx32b5zzeyonuxiiem0wz23ztkrkl6t9kjfqz04a3ojmnj8pa4haebemnbhxragpj0h8d30h2q4ywxvpzqdnzai7tfftmpq1g0asw09fkbrn630sv7x6ip2re6xzd6wuuwxoflvbeazqny4aeilfen9a03o6zm4v0rn1s == \y\t\y\h\j\g\9\g\5\b\c\q\m\o\f\u\4\c\i\9\a\n\0\c\e\8\y\2\d\7\4\r\k\0\9\u\2\7\y\8\b\t\u\a\u\4\b\a\e\x\r\z\2\7\2\f\h\5\x\q\4\m\1\i\m\p\x\t\v\n\w\w\8\h\t\z\4\c\0\2\p\j\4\o\3\u\p\r\o\4\i\n\i\f\b\f\9\d\j\f\s\7\p\i\9\h\f\u\t\b\y\e\t\3\h\o\l\z\m\m\9\4\g\h\i\5\0\2\l\u\y\g\j\n\u\h\4\u\p\6\v\m\v\a\3\x\6\b\8\u\f\n\n\2\j\2\u\k\2\c\5\o\i\d\n\y\u\2\o\i\1\7\0\1\9\6\e\y\2\8\d\w\m\3\f\y\c\0\b\2\e\1\3\3\g\m\k\h\c\k\l\8\j\8\x\0\s\4\g\n\1\j\d\h\1\i\x\m\u\i\w\0\6\d\k\z\k\f\8\o\1\8\b\8\a\k\k\s\w\u\7\g\l\l\5\5\u\p\g\o\a\h\8\a\p\4\c\o\p\i\h\i\3\4\s\k\p\1\k\f\k\7\z\0\a\k\t\8\u\n\8\s\r\w\1\3\8\v\l\o\x\s\a\1\3\k\t\a\w\d\3\h\k\i\a\m\r\a\g\o\a\j\j\k\l\9\o\g\r\4\l\e\c\h\9\a\g\4\8\u\a\i\c\4\p\d\u\b\l\u\j\d\m\r\q\j\4\c\x\3\2\b\5\z\z\e\y\o\n\u\x\i\i\e\m\0\w\z\2\3\z\t\k\r\k\l\6\t\9\k\j\f\q\z\0\4\a\3\o\j\m\n\j\8\p\a\4\h\a\e\b\e\m\n\b\h\x\r\a\g\p\j\0\h\8\d\3\0\h\2\q\4\y\w\x\v\p\z\q\d\n\z\a\i\7\t\f\f\t\m\p\q\1\g\0\a\s\w\0\9\f\k\b\r\n\6\3\0\s\v\7\x\6\i\p\2\r\e\6\x\z\d\6\w\u\u\w\x\o\f\l\v\b\e\a\z\q\n\y\4\a\e\i\l\f\e\n\9\a\0\3\o\6\z\m\4\v\0\r\n\1\s ]] 00:26:32.073 00:26:32.073 real 0m6.438s 00:26:32.073 user 0m5.145s 00:26:32.073 sys 0m0.698s 00:26:32.073 ************************************ 00:26:32.073 END TEST dd_flag_nofollow_forced_aio 00:26:32.073 ************************************ 00:26:32.073 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:32.073 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:26:32.073 23:41:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:26:32.073 23:41:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:32.073 23:41:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:26:32.074 ************************************ 00:26:32.074 START TEST dd_flag_noatime_forced_aio 00:26:32.074 ************************************ 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1121 -- # noatime 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 
-- # local atime_of 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1715730113 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1715730115 00:26:32.074 23:41:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:26:33.010 23:41:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:33.010 [2024-05-14 23:41:56.263458] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:26:33.010 [2024-05-14 23:41:56.263667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79136 ] 00:26:33.269 [2024-05-14 23:41:56.425058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.528 [2024-05-14 23:41:56.662523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.164  Copying: 512/512 [B] (average 500 kBps) 00:26:35.164 00:26:35.164 23:41:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:35.164 23:41:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1715730113 )) 00:26:35.164 23:41:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:35.164 23:41:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1715730115 )) 00:26:35.164 23:41:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:35.164 [2024-05-14 23:41:58.417695] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:35.164 [2024-05-14 23:41:58.417960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79167 ] 00:26:35.423 [2024-05-14 23:41:58.569922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.682 [2024-05-14 23:41:58.789628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.313  Copying: 512/512 [B] (average 500 kBps) 00:26:37.313 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:37.313 ************************************ 00:26:37.313 END TEST dd_flag_noatime_forced_aio 00:26:37.313 ************************************ 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1715730119 )) 00:26:37.313 00:26:37.313 real 0m5.276s 00:26:37.313 user 0m3.376s 00:26:37.313 sys 0m0.492s 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:26:37.313 ************************************ 00:26:37.313 START TEST dd_flags_misc_forced_aio 00:26:37.313 ************************************ 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1121 -- # io 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:37.313 23:42:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:37.313 [2024-05-14 23:42:00.572941] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:37.313 [2024-05-14 23:42:00.573130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79210 ] 00:26:37.572 [2024-05-14 23:42:00.725494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.830 [2024-05-14 23:42:00.941102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.467  Copying: 512/512 [B] (average 500 kBps) 00:26:39.467 00:26:39.467 23:42:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8866xadr3fn5n3fqxly1ziqcyjwe1fj9t3rpjks4b2jqi3v1wbck7brp3q61z929gah6ubnt9zqph0j4w9x6d30fbzhbu295ro4lrqrcmdymc70siphg4iw3hcgro9w8c2lnml5mvtby6wzl36gdnzmb7d7urapmgzlcelzi3afu8uzsi57xbapnoh9zzaj098su9lzq6rf2le85diktd8t374co6keoilm6oekd1tv53l3sh537wfcjd8zxae9m0sr6qjaiuqxs39u7fsxa0ycug1g6tnpvfy97ey0fwvex850v1l00t00qdb1fftdjdla7gnrtvjbjhx19mmozrhw3qyoktqs9w2xdqkx0srmv6ebn4z5ilvn440n65etve4k1h58yj1ly4ncqnpfp0wntkwcdz5r1yup0po1wlmxadash4n1fwnz6gpucim82mhnc0648vd84hswauteptpk8r5lbhewyefbfu210frkte3mzffjtp1um1u7erja9 == \8\8\6\6\x\a\d\r\3\f\n\5\n\3\f\q\x\l\y\1\z\i\q\c\y\j\w\e\1\f\j\9\t\3\r\p\j\k\s\4\b\2\j\q\i\3\v\1\w\b\c\k\7\b\r\p\3\q\6\1\z\9\2\9\g\a\h\6\u\b\n\t\9\z\q\p\h\0\j\4\w\9\x\6\d\3\0\f\b\z\h\b\u\2\9\5\r\o\4\l\r\q\r\c\m\d\y\m\c\7\0\s\i\p\h\g\4\i\w\3\h\c\g\r\o\9\w\8\c\2\l\n\m\l\5\m\v\t\b\y\6\w\z\l\3\6\g\d\n\z\m\b\7\d\7\u\r\a\p\m\g\z\l\c\e\l\z\i\3\a\f\u\8\u\z\s\i\5\7\x\b\a\p\n\o\h\9\z\z\a\j\0\9\8\s\u\9\l\z\q\6\r\f\2\l\e\8\5\d\i\k\t\d\8\t\3\7\4\c\o\6\k\e\o\i\l\m\6\o\e\k\d\1\t\v\5\3\l\3\s\h\5\3\7\w\f\c\j\d\8\z\x\a\e\9\m\0\s\r\6\q\j\a\i\u\q\x\s\3\9\u\7\f\s\x\a\0\y\c\u\g\1\g\6\t\n\p\v\f\y\9\7\e\y\0\f\w\v\e\x\8\5\0\v\1\l\0\0\t\0\0\q\d\b\1\f\f\t\d\j\d\l\a\7\g\n\r\t\v\j\b\j\h\x\1\9\m\m\o\z\r\h\w\3\q\y\o\k\t\q\s\9\w\2\x\d\q\k\x\0\s\r\m\v\6\e\b\n\4\z\5\i\l\v\n\4\4\0\n\6\5\e\t\v\e\4\k\1\h\5\8\y\j\1\l\y\4\n\c\q\n\p\f\p\0\w\n\t\k\w\c\d\z\5\r\1\y\u\p\0\p\o\1\w\l\m\x\a\d\a\s\h\4\n\1\f\w\n\z\6\g\p\u\c\i\m\8\2\m\h\n\c\0\6\4\8\v\d\8\4\h\s\w\a\u\t\e\p\t\p\k\8\r\5\l\b\h\e\w\y\e\f\b\f\u\2\1\0\f\r\k\t\e\3\m\z\f\f\j\t\p\1\u\m\1\u\7\e\r\j\a\9 ]] 00:26:39.467 23:42:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:39.467 23:42:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:39.467 [2024-05-14 23:42:02.653447] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:39.467 [2024-05-14 23:42:02.653624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79243 ] 00:26:39.724 [2024-05-14 23:42:02.804242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.981 [2024-05-14 23:42:03.011979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.616  Copying: 512/512 [B] (average 500 kBps) 00:26:41.616 00:26:41.616 23:42:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8866xadr3fn5n3fqxly1ziqcyjwe1fj9t3rpjks4b2jqi3v1wbck7brp3q61z929gah6ubnt9zqph0j4w9x6d30fbzhbu295ro4lrqrcmdymc70siphg4iw3hcgro9w8c2lnml5mvtby6wzl36gdnzmb7d7urapmgzlcelzi3afu8uzsi57xbapnoh9zzaj098su9lzq6rf2le85diktd8t374co6keoilm6oekd1tv53l3sh537wfcjd8zxae9m0sr6qjaiuqxs39u7fsxa0ycug1g6tnpvfy97ey0fwvex850v1l00t00qdb1fftdjdla7gnrtvjbjhx19mmozrhw3qyoktqs9w2xdqkx0srmv6ebn4z5ilvn440n65etve4k1h58yj1ly4ncqnpfp0wntkwcdz5r1yup0po1wlmxadash4n1fwnz6gpucim82mhnc0648vd84hswauteptpk8r5lbhewyefbfu210frkte3mzffjtp1um1u7erja9 == \8\8\6\6\x\a\d\r\3\f\n\5\n\3\f\q\x\l\y\1\z\i\q\c\y\j\w\e\1\f\j\9\t\3\r\p\j\k\s\4\b\2\j\q\i\3\v\1\w\b\c\k\7\b\r\p\3\q\6\1\z\9\2\9\g\a\h\6\u\b\n\t\9\z\q\p\h\0\j\4\w\9\x\6\d\3\0\f\b\z\h\b\u\2\9\5\r\o\4\l\r\q\r\c\m\d\y\m\c\7\0\s\i\p\h\g\4\i\w\3\h\c\g\r\o\9\w\8\c\2\l\n\m\l\5\m\v\t\b\y\6\w\z\l\3\6\g\d\n\z\m\b\7\d\7\u\r\a\p\m\g\z\l\c\e\l\z\i\3\a\f\u\8\u\z\s\i\5\7\x\b\a\p\n\o\h\9\z\z\a\j\0\9\8\s\u\9\l\z\q\6\r\f\2\l\e\8\5\d\i\k\t\d\8\t\3\7\4\c\o\6\k\e\o\i\l\m\6\o\e\k\d\1\t\v\5\3\l\3\s\h\5\3\7\w\f\c\j\d\8\z\x\a\e\9\m\0\s\r\6\q\j\a\i\u\q\x\s\3\9\u\7\f\s\x\a\0\y\c\u\g\1\g\6\t\n\p\v\f\y\9\7\e\y\0\f\w\v\e\x\8\5\0\v\1\l\0\0\t\0\0\q\d\b\1\f\f\t\d\j\d\l\a\7\g\n\r\t\v\j\b\j\h\x\1\9\m\m\o\z\r\h\w\3\q\y\o\k\t\q\s\9\w\2\x\d\q\k\x\0\s\r\m\v\6\e\b\n\4\z\5\i\l\v\n\4\4\0\n\6\5\e\t\v\e\4\k\1\h\5\8\y\j\1\l\y\4\n\c\q\n\p\f\p\0\w\n\t\k\w\c\d\z\5\r\1\y\u\p\0\p\o\1\w\l\m\x\a\d\a\s\h\4\n\1\f\w\n\z\6\g\p\u\c\i\m\8\2\m\h\n\c\0\6\4\8\v\d\8\4\h\s\w\a\u\t\e\p\t\p\k\8\r\5\l\b\h\e\w\y\e\f\b\f\u\2\1\0\f\r\k\t\e\3\m\z\f\f\j\t\p\1\u\m\1\u\7\e\r\j\a\9 ]] 00:26:41.616 23:42:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:41.616 23:42:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:41.616 [2024-05-14 23:42:04.700140] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:41.616 [2024-05-14 23:42:04.700330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79274 ] 00:26:41.616 [2024-05-14 23:42:04.852242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.875 [2024-05-14 23:42:05.061310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.505  Copying: 512/512 [B] (average 250 kBps) 00:26:43.505 00:26:43.505 23:42:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8866xadr3fn5n3fqxly1ziqcyjwe1fj9t3rpjks4b2jqi3v1wbck7brp3q61z929gah6ubnt9zqph0j4w9x6d30fbzhbu295ro4lrqrcmdymc70siphg4iw3hcgro9w8c2lnml5mvtby6wzl36gdnzmb7d7urapmgzlcelzi3afu8uzsi57xbapnoh9zzaj098su9lzq6rf2le85diktd8t374co6keoilm6oekd1tv53l3sh537wfcjd8zxae9m0sr6qjaiuqxs39u7fsxa0ycug1g6tnpvfy97ey0fwvex850v1l00t00qdb1fftdjdla7gnrtvjbjhx19mmozrhw3qyoktqs9w2xdqkx0srmv6ebn4z5ilvn440n65etve4k1h58yj1ly4ncqnpfp0wntkwcdz5r1yup0po1wlmxadash4n1fwnz6gpucim82mhnc0648vd84hswauteptpk8r5lbhewyefbfu210frkte3mzffjtp1um1u7erja9 == \8\8\6\6\x\a\d\r\3\f\n\5\n\3\f\q\x\l\y\1\z\i\q\c\y\j\w\e\1\f\j\9\t\3\r\p\j\k\s\4\b\2\j\q\i\3\v\1\w\b\c\k\7\b\r\p\3\q\6\1\z\9\2\9\g\a\h\6\u\b\n\t\9\z\q\p\h\0\j\4\w\9\x\6\d\3\0\f\b\z\h\b\u\2\9\5\r\o\4\l\r\q\r\c\m\d\y\m\c\7\0\s\i\p\h\g\4\i\w\3\h\c\g\r\o\9\w\8\c\2\l\n\m\l\5\m\v\t\b\y\6\w\z\l\3\6\g\d\n\z\m\b\7\d\7\u\r\a\p\m\g\z\l\c\e\l\z\i\3\a\f\u\8\u\z\s\i\5\7\x\b\a\p\n\o\h\9\z\z\a\j\0\9\8\s\u\9\l\z\q\6\r\f\2\l\e\8\5\d\i\k\t\d\8\t\3\7\4\c\o\6\k\e\o\i\l\m\6\o\e\k\d\1\t\v\5\3\l\3\s\h\5\3\7\w\f\c\j\d\8\z\x\a\e\9\m\0\s\r\6\q\j\a\i\u\q\x\s\3\9\u\7\f\s\x\a\0\y\c\u\g\1\g\6\t\n\p\v\f\y\9\7\e\y\0\f\w\v\e\x\8\5\0\v\1\l\0\0\t\0\0\q\d\b\1\f\f\t\d\j\d\l\a\7\g\n\r\t\v\j\b\j\h\x\1\9\m\m\o\z\r\h\w\3\q\y\o\k\t\q\s\9\w\2\x\d\q\k\x\0\s\r\m\v\6\e\b\n\4\z\5\i\l\v\n\4\4\0\n\6\5\e\t\v\e\4\k\1\h\5\8\y\j\1\l\y\4\n\c\q\n\p\f\p\0\w\n\t\k\w\c\d\z\5\r\1\y\u\p\0\p\o\1\w\l\m\x\a\d\a\s\h\4\n\1\f\w\n\z\6\g\p\u\c\i\m\8\2\m\h\n\c\0\6\4\8\v\d\8\4\h\s\w\a\u\t\e\p\t\p\k\8\r\5\l\b\h\e\w\y\e\f\b\f\u\2\1\0\f\r\k\t\e\3\m\z\f\f\j\t\p\1\u\m\1\u\7\e\r\j\a\9 ]] 00:26:43.505 23:42:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:43.505 23:42:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:43.505 [2024-05-14 23:42:06.737734] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:43.505 [2024-05-14 23:42:06.737910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79299 ] 00:26:43.762 [2024-05-14 23:42:06.900753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.020 [2024-05-14 23:42:07.110638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.655  Copying: 512/512 [B] (average 250 kBps) 00:26:45.655 00:26:45.655 23:42:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8866xadr3fn5n3fqxly1ziqcyjwe1fj9t3rpjks4b2jqi3v1wbck7brp3q61z929gah6ubnt9zqph0j4w9x6d30fbzhbu295ro4lrqrcmdymc70siphg4iw3hcgro9w8c2lnml5mvtby6wzl36gdnzmb7d7urapmgzlcelzi3afu8uzsi57xbapnoh9zzaj098su9lzq6rf2le85diktd8t374co6keoilm6oekd1tv53l3sh537wfcjd8zxae9m0sr6qjaiuqxs39u7fsxa0ycug1g6tnpvfy97ey0fwvex850v1l00t00qdb1fftdjdla7gnrtvjbjhx19mmozrhw3qyoktqs9w2xdqkx0srmv6ebn4z5ilvn440n65etve4k1h58yj1ly4ncqnpfp0wntkwcdz5r1yup0po1wlmxadash4n1fwnz6gpucim82mhnc0648vd84hswauteptpk8r5lbhewyefbfu210frkte3mzffjtp1um1u7erja9 == \8\8\6\6\x\a\d\r\3\f\n\5\n\3\f\q\x\l\y\1\z\i\q\c\y\j\w\e\1\f\j\9\t\3\r\p\j\k\s\4\b\2\j\q\i\3\v\1\w\b\c\k\7\b\r\p\3\q\6\1\z\9\2\9\g\a\h\6\u\b\n\t\9\z\q\p\h\0\j\4\w\9\x\6\d\3\0\f\b\z\h\b\u\2\9\5\r\o\4\l\r\q\r\c\m\d\y\m\c\7\0\s\i\p\h\g\4\i\w\3\h\c\g\r\o\9\w\8\c\2\l\n\m\l\5\m\v\t\b\y\6\w\z\l\3\6\g\d\n\z\m\b\7\d\7\u\r\a\p\m\g\z\l\c\e\l\z\i\3\a\f\u\8\u\z\s\i\5\7\x\b\a\p\n\o\h\9\z\z\a\j\0\9\8\s\u\9\l\z\q\6\r\f\2\l\e\8\5\d\i\k\t\d\8\t\3\7\4\c\o\6\k\e\o\i\l\m\6\o\e\k\d\1\t\v\5\3\l\3\s\h\5\3\7\w\f\c\j\d\8\z\x\a\e\9\m\0\s\r\6\q\j\a\i\u\q\x\s\3\9\u\7\f\s\x\a\0\y\c\u\g\1\g\6\t\n\p\v\f\y\9\7\e\y\0\f\w\v\e\x\8\5\0\v\1\l\0\0\t\0\0\q\d\b\1\f\f\t\d\j\d\l\a\7\g\n\r\t\v\j\b\j\h\x\1\9\m\m\o\z\r\h\w\3\q\y\o\k\t\q\s\9\w\2\x\d\q\k\x\0\s\r\m\v\6\e\b\n\4\z\5\i\l\v\n\4\4\0\n\6\5\e\t\v\e\4\k\1\h\5\8\y\j\1\l\y\4\n\c\q\n\p\f\p\0\w\n\t\k\w\c\d\z\5\r\1\y\u\p\0\p\o\1\w\l\m\x\a\d\a\s\h\4\n\1\f\w\n\z\6\g\p\u\c\i\m\8\2\m\h\n\c\0\6\4\8\v\d\8\4\h\s\w\a\u\t\e\p\t\p\k\8\r\5\l\b\h\e\w\y\e\f\b\f\u\2\1\0\f\r\k\t\e\3\m\z\f\f\j\t\p\1\u\m\1\u\7\e\r\j\a\9 ]] 00:26:45.655 23:42:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:45.655 23:42:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:26:45.655 23:42:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:26:45.655 23:42:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:26:45.655 23:42:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:45.655 23:42:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:45.655 [2024-05-14 23:42:08.822953] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:45.655 [2024-05-14 23:42:08.823130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79327 ] 00:26:45.912 [2024-05-14 23:42:08.983404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.912 [2024-05-14 23:42:09.192472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.416  Copying: 512/512 [B] (average 500 kBps) 00:26:47.416 00:26:47.416 23:42:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xwlhfna15c9h6jjb377gqvz4t4pay71dmxc3nz8ok1dghmve3ztcv31ghf9dxh2to8vv2qe47byb87ginkzvqf4ak875q49ziv0mnz8dhwna18noyn02tkkd37xi06dovxunifd5fmerrv5wmm6if9d8cz5pfzuip8xmqk5gnco1vryoprjeswan07wb5j8q7igryj46m4ozcqkixvg7jdjq3z2l81uznoivf8g2f2vifqak1gmujyktpo7v710j0k45gx9ogfwmlkzea73s0r84qx44u7bmuqta2k45fctw737jijplcnkwscf734nzaiwrjsxdgwvtcou828ufs7n5pblghkci0zw5p3vbxvekamgryihrjdmbbpob6je6x8uddjtvnm4tcb2wk54smh9r2scuijl9t1r0drnd9m1tg50ln9tulcnsp1v4ez0j1efoqos7xkz9jbppl5j3ylzp8aqgo5ksajwz9ddv43wol09v7de87ji14jn3bq9x == \x\w\l\h\f\n\a\1\5\c\9\h\6\j\j\b\3\7\7\g\q\v\z\4\t\4\p\a\y\7\1\d\m\x\c\3\n\z\8\o\k\1\d\g\h\m\v\e\3\z\t\c\v\3\1\g\h\f\9\d\x\h\2\t\o\8\v\v\2\q\e\4\7\b\y\b\8\7\g\i\n\k\z\v\q\f\4\a\k\8\7\5\q\4\9\z\i\v\0\m\n\z\8\d\h\w\n\a\1\8\n\o\y\n\0\2\t\k\k\d\3\7\x\i\0\6\d\o\v\x\u\n\i\f\d\5\f\m\e\r\r\v\5\w\m\m\6\i\f\9\d\8\c\z\5\p\f\z\u\i\p\8\x\m\q\k\5\g\n\c\o\1\v\r\y\o\p\r\j\e\s\w\a\n\0\7\w\b\5\j\8\q\7\i\g\r\y\j\4\6\m\4\o\z\c\q\k\i\x\v\g\7\j\d\j\q\3\z\2\l\8\1\u\z\n\o\i\v\f\8\g\2\f\2\v\i\f\q\a\k\1\g\m\u\j\y\k\t\p\o\7\v\7\1\0\j\0\k\4\5\g\x\9\o\g\f\w\m\l\k\z\e\a\7\3\s\0\r\8\4\q\x\4\4\u\7\b\m\u\q\t\a\2\k\4\5\f\c\t\w\7\3\7\j\i\j\p\l\c\n\k\w\s\c\f\7\3\4\n\z\a\i\w\r\j\s\x\d\g\w\v\t\c\o\u\8\2\8\u\f\s\7\n\5\p\b\l\g\h\k\c\i\0\z\w\5\p\3\v\b\x\v\e\k\a\m\g\r\y\i\h\r\j\d\m\b\b\p\o\b\6\j\e\6\x\8\u\d\d\j\t\v\n\m\4\t\c\b\2\w\k\5\4\s\m\h\9\r\2\s\c\u\i\j\l\9\t\1\r\0\d\r\n\d\9\m\1\t\g\5\0\l\n\9\t\u\l\c\n\s\p\1\v\4\e\z\0\j\1\e\f\o\q\o\s\7\x\k\z\9\j\b\p\p\l\5\j\3\y\l\z\p\8\a\q\g\o\5\k\s\a\j\w\z\9\d\d\v\4\3\w\o\l\0\9\v\7\d\e\8\7\j\i\1\4\j\n\3\b\q\9\x ]] 00:26:47.416 23:42:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:47.416 23:42:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:47.673 [2024-05-14 23:42:10.813954] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:47.673 [2024-05-14 23:42:10.814133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79352 ] 00:26:47.932 [2024-05-14 23:42:10.964515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.932 [2024-05-14 23:42:11.174686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.433  Copying: 512/512 [B] (average 500 kBps) 00:26:49.433 00:26:49.433 23:42:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xwlhfna15c9h6jjb377gqvz4t4pay71dmxc3nz8ok1dghmve3ztcv31ghf9dxh2to8vv2qe47byb87ginkzvqf4ak875q49ziv0mnz8dhwna18noyn02tkkd37xi06dovxunifd5fmerrv5wmm6if9d8cz5pfzuip8xmqk5gnco1vryoprjeswan07wb5j8q7igryj46m4ozcqkixvg7jdjq3z2l81uznoivf8g2f2vifqak1gmujyktpo7v710j0k45gx9ogfwmlkzea73s0r84qx44u7bmuqta2k45fctw737jijplcnkwscf734nzaiwrjsxdgwvtcou828ufs7n5pblghkci0zw5p3vbxvekamgryihrjdmbbpob6je6x8uddjtvnm4tcb2wk54smh9r2scuijl9t1r0drnd9m1tg50ln9tulcnsp1v4ez0j1efoqos7xkz9jbppl5j3ylzp8aqgo5ksajwz9ddv43wol09v7de87ji14jn3bq9x == \x\w\l\h\f\n\a\1\5\c\9\h\6\j\j\b\3\7\7\g\q\v\z\4\t\4\p\a\y\7\1\d\m\x\c\3\n\z\8\o\k\1\d\g\h\m\v\e\3\z\t\c\v\3\1\g\h\f\9\d\x\h\2\t\o\8\v\v\2\q\e\4\7\b\y\b\8\7\g\i\n\k\z\v\q\f\4\a\k\8\7\5\q\4\9\z\i\v\0\m\n\z\8\d\h\w\n\a\1\8\n\o\y\n\0\2\t\k\k\d\3\7\x\i\0\6\d\o\v\x\u\n\i\f\d\5\f\m\e\r\r\v\5\w\m\m\6\i\f\9\d\8\c\z\5\p\f\z\u\i\p\8\x\m\q\k\5\g\n\c\o\1\v\r\y\o\p\r\j\e\s\w\a\n\0\7\w\b\5\j\8\q\7\i\g\r\y\j\4\6\m\4\o\z\c\q\k\i\x\v\g\7\j\d\j\q\3\z\2\l\8\1\u\z\n\o\i\v\f\8\g\2\f\2\v\i\f\q\a\k\1\g\m\u\j\y\k\t\p\o\7\v\7\1\0\j\0\k\4\5\g\x\9\o\g\f\w\m\l\k\z\e\a\7\3\s\0\r\8\4\q\x\4\4\u\7\b\m\u\q\t\a\2\k\4\5\f\c\t\w\7\3\7\j\i\j\p\l\c\n\k\w\s\c\f\7\3\4\n\z\a\i\w\r\j\s\x\d\g\w\v\t\c\o\u\8\2\8\u\f\s\7\n\5\p\b\l\g\h\k\c\i\0\z\w\5\p\3\v\b\x\v\e\k\a\m\g\r\y\i\h\r\j\d\m\b\b\p\o\b\6\j\e\6\x\8\u\d\d\j\t\v\n\m\4\t\c\b\2\w\k\5\4\s\m\h\9\r\2\s\c\u\i\j\l\9\t\1\r\0\d\r\n\d\9\m\1\t\g\5\0\l\n\9\t\u\l\c\n\s\p\1\v\4\e\z\0\j\1\e\f\o\q\o\s\7\x\k\z\9\j\b\p\p\l\5\j\3\y\l\z\p\8\a\q\g\o\5\k\s\a\j\w\z\9\d\d\v\4\3\w\o\l\0\9\v\7\d\e\8\7\j\i\1\4\j\n\3\b\q\9\x ]] 00:26:49.433 23:42:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:49.433 23:42:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:49.692 [2024-05-14 23:42:12.822557] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:49.692 [2024-05-14 23:42:12.822754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79381 ] 00:26:49.692 [2024-05-14 23:42:12.971592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.950 [2024-05-14 23:42:13.179742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.451  Copying: 512/512 [B] (average 250 kBps) 00:26:51.451 00:26:51.451 23:42:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xwlhfna15c9h6jjb377gqvz4t4pay71dmxc3nz8ok1dghmve3ztcv31ghf9dxh2to8vv2qe47byb87ginkzvqf4ak875q49ziv0mnz8dhwna18noyn02tkkd37xi06dovxunifd5fmerrv5wmm6if9d8cz5pfzuip8xmqk5gnco1vryoprjeswan07wb5j8q7igryj46m4ozcqkixvg7jdjq3z2l81uznoivf8g2f2vifqak1gmujyktpo7v710j0k45gx9ogfwmlkzea73s0r84qx44u7bmuqta2k45fctw737jijplcnkwscf734nzaiwrjsxdgwvtcou828ufs7n5pblghkci0zw5p3vbxvekamgryihrjdmbbpob6je6x8uddjtvnm4tcb2wk54smh9r2scuijl9t1r0drnd9m1tg50ln9tulcnsp1v4ez0j1efoqos7xkz9jbppl5j3ylzp8aqgo5ksajwz9ddv43wol09v7de87ji14jn3bq9x == \x\w\l\h\f\n\a\1\5\c\9\h\6\j\j\b\3\7\7\g\q\v\z\4\t\4\p\a\y\7\1\d\m\x\c\3\n\z\8\o\k\1\d\g\h\m\v\e\3\z\t\c\v\3\1\g\h\f\9\d\x\h\2\t\o\8\v\v\2\q\e\4\7\b\y\b\8\7\g\i\n\k\z\v\q\f\4\a\k\8\7\5\q\4\9\z\i\v\0\m\n\z\8\d\h\w\n\a\1\8\n\o\y\n\0\2\t\k\k\d\3\7\x\i\0\6\d\o\v\x\u\n\i\f\d\5\f\m\e\r\r\v\5\w\m\m\6\i\f\9\d\8\c\z\5\p\f\z\u\i\p\8\x\m\q\k\5\g\n\c\o\1\v\r\y\o\p\r\j\e\s\w\a\n\0\7\w\b\5\j\8\q\7\i\g\r\y\j\4\6\m\4\o\z\c\q\k\i\x\v\g\7\j\d\j\q\3\z\2\l\8\1\u\z\n\o\i\v\f\8\g\2\f\2\v\i\f\q\a\k\1\g\m\u\j\y\k\t\p\o\7\v\7\1\0\j\0\k\4\5\g\x\9\o\g\f\w\m\l\k\z\e\a\7\3\s\0\r\8\4\q\x\4\4\u\7\b\m\u\q\t\a\2\k\4\5\f\c\t\w\7\3\7\j\i\j\p\l\c\n\k\w\s\c\f\7\3\4\n\z\a\i\w\r\j\s\x\d\g\w\v\t\c\o\u\8\2\8\u\f\s\7\n\5\p\b\l\g\h\k\c\i\0\z\w\5\p\3\v\b\x\v\e\k\a\m\g\r\y\i\h\r\j\d\m\b\b\p\o\b\6\j\e\6\x\8\u\d\d\j\t\v\n\m\4\t\c\b\2\w\k\5\4\s\m\h\9\r\2\s\c\u\i\j\l\9\t\1\r\0\d\r\n\d\9\m\1\t\g\5\0\l\n\9\t\u\l\c\n\s\p\1\v\4\e\z\0\j\1\e\f\o\q\o\s\7\x\k\z\9\j\b\p\p\l\5\j\3\y\l\z\p\8\a\q\g\o\5\k\s\a\j\w\z\9\d\d\v\4\3\w\o\l\0\9\v\7\d\e\8\7\j\i\1\4\j\n\3\b\q\9\x ]] 00:26:51.451 23:42:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:51.451 23:42:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:51.709 [2024-05-14 23:42:14.849003] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:26:51.709 [2024-05-14 23:42:14.849217] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79410 ] 00:26:51.967 [2024-05-14 23:42:14.999029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.967 [2024-05-14 23:42:15.207382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.704  Copying: 512/512 [B] (average 166 kBps) 00:26:53.704 00:26:53.704 ************************************ 00:26:53.704 END TEST dd_flags_misc_forced_aio 00:26:53.704 ************************************ 00:26:53.704 23:42:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xwlhfna15c9h6jjb377gqvz4t4pay71dmxc3nz8ok1dghmve3ztcv31ghf9dxh2to8vv2qe47byb87ginkzvqf4ak875q49ziv0mnz8dhwna18noyn02tkkd37xi06dovxunifd5fmerrv5wmm6if9d8cz5pfzuip8xmqk5gnco1vryoprjeswan07wb5j8q7igryj46m4ozcqkixvg7jdjq3z2l81uznoivf8g2f2vifqak1gmujyktpo7v710j0k45gx9ogfwmlkzea73s0r84qx44u7bmuqta2k45fctw737jijplcnkwscf734nzaiwrjsxdgwvtcou828ufs7n5pblghkci0zw5p3vbxvekamgryihrjdmbbpob6je6x8uddjtvnm4tcb2wk54smh9r2scuijl9t1r0drnd9m1tg50ln9tulcnsp1v4ez0j1efoqos7xkz9jbppl5j3ylzp8aqgo5ksajwz9ddv43wol09v7de87ji14jn3bq9x == \x\w\l\h\f\n\a\1\5\c\9\h\6\j\j\b\3\7\7\g\q\v\z\4\t\4\p\a\y\7\1\d\m\x\c\3\n\z\8\o\k\1\d\g\h\m\v\e\3\z\t\c\v\3\1\g\h\f\9\d\x\h\2\t\o\8\v\v\2\q\e\4\7\b\y\b\8\7\g\i\n\k\z\v\q\f\4\a\k\8\7\5\q\4\9\z\i\v\0\m\n\z\8\d\h\w\n\a\1\8\n\o\y\n\0\2\t\k\k\d\3\7\x\i\0\6\d\o\v\x\u\n\i\f\d\5\f\m\e\r\r\v\5\w\m\m\6\i\f\9\d\8\c\z\5\p\f\z\u\i\p\8\x\m\q\k\5\g\n\c\o\1\v\r\y\o\p\r\j\e\s\w\a\n\0\7\w\b\5\j\8\q\7\i\g\r\y\j\4\6\m\4\o\z\c\q\k\i\x\v\g\7\j\d\j\q\3\z\2\l\8\1\u\z\n\o\i\v\f\8\g\2\f\2\v\i\f\q\a\k\1\g\m\u\j\y\k\t\p\o\7\v\7\1\0\j\0\k\4\5\g\x\9\o\g\f\w\m\l\k\z\e\a\7\3\s\0\r\8\4\q\x\4\4\u\7\b\m\u\q\t\a\2\k\4\5\f\c\t\w\7\3\7\j\i\j\p\l\c\n\k\w\s\c\f\7\3\4\n\z\a\i\w\r\j\s\x\d\g\w\v\t\c\o\u\8\2\8\u\f\s\7\n\5\p\b\l\g\h\k\c\i\0\z\w\5\p\3\v\b\x\v\e\k\a\m\g\r\y\i\h\r\j\d\m\b\b\p\o\b\6\j\e\6\x\8\u\d\d\j\t\v\n\m\4\t\c\b\2\w\k\5\4\s\m\h\9\r\2\s\c\u\i\j\l\9\t\1\r\0\d\r\n\d\9\m\1\t\g\5\0\l\n\9\t\u\l\c\n\s\p\1\v\4\e\z\0\j\1\e\f\o\q\o\s\7\x\k\z\9\j\b\p\p\l\5\j\3\y\l\z\p\8\a\q\g\o\5\k\s\a\j\w\z\9\d\d\v\4\3\w\o\l\0\9\v\7\d\e\8\7\j\i\1\4\j\n\3\b\q\9\x ]] 00:26:53.704 00:26:53.704 real 0m16.303s 00:26:53.704 user 0m12.858s 00:26:53.704 sys 0m1.813s 00:26:53.704 23:42:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:53.704 23:42:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:26:53.704 23:42:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:26:53.704 23:42:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:53.704 23:42:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:53.704 ************************************ 00:26:53.704 END TEST spdk_dd_posix 00:26:53.704 ************************************ 00:26:53.704 00:26:53.704 real 1m9.565s 00:26:53.704 user 0m53.166s 00:26:53.704 sys 0m7.882s 00:26:53.704 23:42:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:53.704 23:42:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:26:53.704 23:42:16 spdk_dd -- 
dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:26:53.704 23:42:16 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:53.704 23:42:16 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:53.704 23:42:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:26:53.704 ************************************ 00:26:53.704 START TEST spdk_dd_malloc 00:26:53.704 ************************************ 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:26:53.704 * Looking for test storage... 00:26:53.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 
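run_test, which appears at every level of this log, is the autotest wrapper that produces the START TEST / END TEST banners and the real/user/sys timing triplets shown throughout. A rough stand-in inferred from that visible behaviour, not the actual autotest_common.sh implementation:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # emits the real/user/sys lines recorded in the log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}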
00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:26:53.704 ************************************ 00:26:53.704 START TEST dd_malloc_copy 00:26:53.704 ************************************ 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1121 -- # malloc_copy 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:26:53.704 23:42:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:26:53.705 23:42:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:26:53.705 23:42:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:26:53.705 23:42:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:26:53.705 23:42:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:26:53.705 23:42:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:26:53.705 23:42:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:26:53.705 { 00:26:53.705 "subsystems": [ 00:26:53.705 { 00:26:53.705 "subsystem": "bdev", 00:26:53.705 "config": [ 00:26:53.705 { 00:26:53.705 "params": { 00:26:53.705 "block_size": 512, 00:26:53.705 "name": "malloc0", 00:26:53.705 "num_blocks": 1048576 00:26:53.705 }, 00:26:53.705 "method": "bdev_malloc_create" 00:26:53.705 }, 00:26:53.705 { 00:26:53.705 "params": { 00:26:53.705 "block_size": 512, 00:26:53.705 "name": "malloc1", 00:26:53.705 "num_blocks": 1048576 00:26:53.705 }, 00:26:53.705 "method": "bdev_malloc_create" 00:26:53.705 }, 00:26:53.705 { 00:26:53.705 "method": "bdev_wait_for_examine" 00:26:53.705 } 00:26:53.705 ] 00:26:53.705 } 00:26:53.705 ] 00:26:53.705 } 00:26:53.964 [2024-05-14 23:42:17.049095] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
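Stripped of the xtrace prefixes, the dd_malloc_copy setup above boils down to two 512 MiB malloc bdevs (1048576 blocks of 512 bytes each) and a single spdk_dd copy between them. A condensed sketch of the same invocation; writing the config to malloc_copy.json is an assumed simplification of the gen_conf / --json /dev/fd/62 plumbing the test actually uses:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cat > malloc_copy.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_wait_for_examine" }
      ] }
  ]
}
EOF
# copy the whole of malloc0 into malloc1; the log reports 512/512 [MB] at ~491 MBps
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json malloc_copy.json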
00:26:53.964 [2024-05-14 23:42:17.049529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79516 ] 00:26:53.964 [2024-05-14 23:42:17.200114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.222 [2024-05-14 23:42:17.402734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.946  Copying: 492/512 [MB] (492 MBps) Copying: 512/512 [MB] (average 491 MBps) 00:27:00.946 00:27:00.946 23:42:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:27:00.946 23:42:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:27:00.946 23:42:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:27:00.946 23:42:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:27:00.946 { 00:27:00.946 "subsystems": [ 00:27:00.946 { 00:27:00.946 "subsystem": "bdev", 00:27:00.946 "config": [ 00:27:00.946 { 00:27:00.946 "params": { 00:27:00.946 "block_size": 512, 00:27:00.946 "name": "malloc0", 00:27:00.946 "num_blocks": 1048576 00:27:00.946 }, 00:27:00.946 "method": "bdev_malloc_create" 00:27:00.946 }, 00:27:00.946 { 00:27:00.946 "params": { 00:27:00.946 "block_size": 512, 00:27:00.946 "name": "malloc1", 00:27:00.946 "num_blocks": 1048576 00:27:00.946 }, 00:27:00.946 "method": "bdev_malloc_create" 00:27:00.946 }, 00:27:00.946 { 00:27:00.946 "method": "bdev_wait_for_examine" 00:27:00.946 } 00:27:00.946 ] 00:27:00.946 } 00:27:00.946 ] 00:27:00.946 } 00:27:00.946 [2024-05-14 23:42:23.573518] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:27:00.946 [2024-05-14 23:42:23.573830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79604 ] 00:27:00.946 [2024-05-14 23:42:23.741627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.946 [2024-05-14 23:42:23.958357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.810  Copying: 499/512 [MB] (499 MBps) Copying: 512/512 [MB] (average 499 MBps) 00:27:06.810 00:27:06.810 ************************************ 00:27:06.810 END TEST dd_malloc_copy 00:27:06.810 ************************************ 00:27:06.810 00:27:06.810 real 0m13.115s 00:27:06.810 user 0m11.616s 00:27:06.810 sys 0m1.219s 00:27:06.810 23:42:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:06.810 23:42:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:27:06.810 00:27:06.810 real 0m13.249s 00:27:06.810 user 0m11.666s 00:27:06.810 sys 0m1.303s 00:27:06.810 23:42:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:06.810 23:42:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:27:06.810 ************************************ 00:27:06.810 END TEST spdk_dd_malloc 00:27:06.810 ************************************ 00:27:06.810 23:42:30 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:27:06.810 23:42:30 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:06.810 23:42:30 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:06.810 23:42:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:27:07.069 ************************************ 00:27:07.069 START TEST spdk_dd_bdev_to_bdev 00:27:07.069 ************************************ 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:27:07.069 * Looking for test storage... 
00:27:07.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:27:07.069 23:42:30 
spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:27:07.069 23:42:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:27:07.069 [2024-05-14 23:42:30.341447] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:27:07.069 [2024-05-14 23:42:30.341672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79755 ] 00:27:07.329 [2024-05-14 23:42:30.505744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.588 [2024-05-14 23:42:30.764082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.534  Copying: 256/256 [MB] (average 1765 MBps) 00:27:09.534 00:27:09.534 23:42:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:09.534 23:42:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:09.534 23:42:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:27:09.534 23:42:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:27:09.534 23:42:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:27:09.534 23:42:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:27:09.534 23:42:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:09.534 23:42:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:27:09.534 ************************************ 00:27:09.534 START TEST dd_inflate_file 00:27:09.534 ************************************ 00:27:09.534 23:42:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:27:09.534 [2024-05-14 23:42:32.632844] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
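The bdev_to_bdev prologue above does two things: it zero-fills a 256 MiB file that will back the aio1 bdev, and it declares the two bdevs (an AIO bdev over that file and the Nvme0 controller at 0000:00:10.0) that every later transfer in this suite splices into its --json config. As a standalone sketch with the paths taken from the log:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
AIO_FILE=/home/vagrant/spdk_repo/spdk/test/dd/aio1
# create the 256 MiB backing file, exactly as the prepare step above does
"$SPDK_DD" --if=/dev/zero --of="$AIO_FILE" --bs=1048576 --count=256
# the bdev definitions the later copies attach on top of it (parameters as
# printed by gen_conf further down in the log):
#   bdev_aio_create             name=aio1   filename=$AIO_FILE   block_size=4096
#   bdev_nvme_attach_controller name=Nvme0  traddr=0000:00:10.0  trtype=pcie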
00:27:09.534 [2024-05-14 23:42:32.633055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79797 ] 00:27:09.534 [2024-05-14 23:42:32.795208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.792 [2024-05-14 23:42:33.009023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.736  Copying: 64/64 [MB] (average 1828 MBps) 00:27:11.736 00:27:11.736 ************************************ 00:27:11.736 END TEST dd_inflate_file 00:27:11.736 ************************************ 00:27:11.736 00:27:11.736 real 0m2.153s 00:27:11.736 user 0m1.696s 00:27:11.736 sys 0m0.256s 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:27:11.736 ************************************ 00:27:11.736 START TEST dd_copy_to_out_bdev 00:27:11.736 ************************************ 00:27:11.736 23:42:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:27:11.736 { 00:27:11.736 "subsystems": [ 00:27:11.736 { 00:27:11.736 "subsystem": "bdev", 00:27:11.736 "config": [ 00:27:11.736 { 00:27:11.736 "params": { 00:27:11.736 "block_size": 4096, 00:27:11.736 "name": "aio1", 00:27:11.736 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:27:11.736 }, 00:27:11.736 "method": "bdev_aio_create" 00:27:11.736 }, 00:27:11.736 { 00:27:11.736 "params": { 00:27:11.736 "trtype": "pcie", 00:27:11.736 "name": "Nvme0", 00:27:11.736 "traddr": "0000:00:10.0" 00:27:11.736 }, 00:27:11.737 "method": "bdev_nvme_attach_controller" 00:27:11.737 }, 00:27:11.737 { 00:27:11.737 "method": "bdev_wait_for_examine" 00:27:11.737 } 00:27:11.737 ] 00:27:11.737 } 00:27:11.737 ] 00:27:11.737 } 00:27:11.737 [2024-05-14 23:42:34.832700] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:27:11.737 [2024-05-14 23:42:34.832872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79857 ] 00:27:11.737 [2024-05-14 23:42:34.980414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.083 [2024-05-14 23:42:35.174956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.094  Copying: 52/64 [MB] (52 MBps) Copying: 64/64 [MB] (average 52 MBps) 00:27:15.094 00:27:15.094 ************************************ 00:27:15.094 END TEST dd_copy_to_out_bdev 00:27:15.094 ************************************ 00:27:15.094 00:27:15.094 real 0m3.275s 00:27:15.094 user 0m2.844s 00:27:15.094 sys 0m0.294s 00:27:15.094 23:42:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:15.094 23:42:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:27:15.094 ************************************ 00:27:15.094 START TEST dd_offset_magic 00:27:15.094 ************************************ 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1121 -- # offset_magic 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:27:15.094 23:42:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:27:15.094 { 00:27:15.094 "subsystems": [ 00:27:15.094 { 00:27:15.094 "subsystem": "bdev", 00:27:15.094 "config": [ 00:27:15.094 { 00:27:15.094 "params": { 00:27:15.094 "block_size": 4096, 00:27:15.094 "name": "aio1", 00:27:15.094 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:27:15.094 }, 00:27:15.094 "method": "bdev_aio_create" 00:27:15.094 }, 00:27:15.094 { 00:27:15.094 "params": { 00:27:15.094 "trtype": "pcie", 00:27:15.094 "name": "Nvme0", 00:27:15.094 "traddr": "0000:00:10.0" 00:27:15.094 }, 00:27:15.094 "method": "bdev_nvme_attach_controller" 00:27:15.094 }, 00:27:15.094 { 00:27:15.094 "method": "bdev_wait_for_examine" 00:27:15.094 } 00:27:15.094 ] 00:27:15.094 } 
00:27:15.094 ] 00:27:15.094 } 00:27:15.094 [2024-05-14 23:42:38.165587] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:27:15.094 [2024-05-14 23:42:38.165783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79918 ] 00:27:15.094 [2024-05-14 23:42:38.323510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.352 [2024-05-14 23:42:38.514501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.228  Copying: 65/65 [MB] (average 198 MBps) 00:27:17.228 00:27:17.228 23:42:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:27:17.228 23:42:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:27:17.228 23:42:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:27:17.228 23:42:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:27:17.487 { 00:27:17.487 "subsystems": [ 00:27:17.487 { 00:27:17.487 "subsystem": "bdev", 00:27:17.487 "config": [ 00:27:17.487 { 00:27:17.487 "params": { 00:27:17.487 "block_size": 4096, 00:27:17.487 "name": "aio1", 00:27:17.487 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:27:17.487 }, 00:27:17.487 "method": "bdev_aio_create" 00:27:17.487 }, 00:27:17.487 { 00:27:17.487 "params": { 00:27:17.487 "trtype": "pcie", 00:27:17.487 "name": "Nvme0", 00:27:17.487 "traddr": "0000:00:10.0" 00:27:17.488 }, 00:27:17.488 "method": "bdev_nvme_attach_controller" 00:27:17.488 }, 00:27:17.488 { 00:27:17.488 "method": "bdev_wait_for_examine" 00:27:17.488 } 00:27:17.488 ] 00:27:17.488 } 00:27:17.488 ] 00:27:17.488 } 00:27:17.488 [2024-05-14 23:42:40.610395] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:27:17.488 [2024-05-14 23:42:40.610611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79956 ] 00:27:17.488 [2024-05-14 23:42:40.768093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.747 [2024-05-14 23:42:40.978288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.257  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:19.257 00:27:19.257 23:42:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:27:19.257 23:42:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:27:19.257 23:42:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:27:19.257 23:42:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:27:19.257 23:42:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:27:19.257 23:42:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:27:19.257 23:42:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:27:19.517 { 00:27:19.517 "subsystems": [ 00:27:19.517 { 00:27:19.517 "subsystem": "bdev", 00:27:19.517 "config": [ 00:27:19.517 { 00:27:19.517 "params": { 00:27:19.517 "block_size": 4096, 00:27:19.517 "name": "aio1", 00:27:19.517 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:27:19.517 }, 00:27:19.517 "method": "bdev_aio_create" 00:27:19.517 }, 00:27:19.517 { 00:27:19.517 "params": { 00:27:19.517 "trtype": "pcie", 00:27:19.517 "name": "Nvme0", 00:27:19.517 "traddr": "0000:00:10.0" 00:27:19.517 }, 00:27:19.517 "method": "bdev_nvme_attach_controller" 00:27:19.517 }, 00:27:19.517 { 00:27:19.517 "method": "bdev_wait_for_examine" 00:27:19.517 } 00:27:19.517 ] 00:27:19.517 } 00:27:19.517 ] 00:27:19.517 } 00:27:19.517 [2024-05-14 23:42:42.669972] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
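The dd_offset_magic cycle above is a seek/skip symmetry check: 65 blocks of 1 MiB are pushed from Nvme0n1 into aio1 at a given block offset, one block is pulled back from the same offset, and its first 26 bytes must still be the magic string planted in dd.dump0 earlier. One iteration in isolation (offset 16; offset 64 follows the same shape); conf.json is an assumed stand-in for the aio1 + Nvme0 config the log feeds via /dev/fd/62:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
offset=16
# write 65 MiB into the AIO bdev starting at block $offset
"$SPDK_DD" --ib=Nvme0n1 --ob=aio1 --count=65 --seek=$offset --bs=1048576 --json conf.json
# read one block back from the same offset into a plain file
"$SPDK_DD" --ib=aio1 --of="$DUMP1" --count=1 --skip=$offset --bs=1048576 --json conf.json
# the magic must sit at the start of that block
read -rn26 magic_check < "$DUMP1"
[[ $magic_check == 'This Is Our Magic, find it' ]]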
00:27:19.517 [2024-05-14 23:42:42.670130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79989 ] 00:27:19.777 [2024-05-14 23:42:42.822138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.777 [2024-05-14 23:42:43.010301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.104  Copying: 65/65 [MB] (average 189 MBps) 00:27:22.104 00:27:22.104 23:42:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:27:22.104 23:42:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:27:22.104 23:42:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:27:22.104 23:42:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:27:22.104 { 00:27:22.104 "subsystems": [ 00:27:22.104 { 00:27:22.104 "subsystem": "bdev", 00:27:22.104 "config": [ 00:27:22.104 { 00:27:22.104 "params": { 00:27:22.104 "block_size": 4096, 00:27:22.104 "name": "aio1", 00:27:22.104 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:27:22.104 }, 00:27:22.104 "method": "bdev_aio_create" 00:27:22.104 }, 00:27:22.104 { 00:27:22.104 "params": { 00:27:22.104 "trtype": "pcie", 00:27:22.104 "name": "Nvme0", 00:27:22.104 "traddr": "0000:00:10.0" 00:27:22.104 }, 00:27:22.104 "method": "bdev_nvme_attach_controller" 00:27:22.104 }, 00:27:22.104 { 00:27:22.104 "method": "bdev_wait_for_examine" 00:27:22.104 } 00:27:22.104 ] 00:27:22.104 } 00:27:22.104 ] 00:27:22.104 } 00:27:22.104 [2024-05-14 23:42:45.193177] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:27:22.104 [2024-05-14 23:42:45.193392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80027 ] 00:27:22.104 [2024-05-14 23:42:45.346699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.363 [2024-05-14 23:42:45.574083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.862  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:23.862 00:27:24.121 ************************************ 00:27:24.121 END TEST dd_offset_magic 00:27:24.121 ************************************ 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:27:24.121 00:27:24.121 real 0m9.131s 00:27:24.121 user 0m7.098s 00:27:24.121 sys 0m1.050s 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:27:24.121 23:42:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:27:24.121 { 00:27:24.121 "subsystems": [ 00:27:24.121 { 00:27:24.121 "subsystem": "bdev", 00:27:24.121 "config": [ 00:27:24.121 { 00:27:24.121 "params": { 00:27:24.121 "block_size": 4096, 00:27:24.121 "name": "aio1", 00:27:24.121 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:27:24.121 }, 00:27:24.121 "method": "bdev_aio_create" 00:27:24.121 }, 00:27:24.121 { 00:27:24.121 "params": { 00:27:24.121 "trtype": "pcie", 00:27:24.121 "name": "Nvme0", 00:27:24.121 "traddr": "0000:00:10.0" 00:27:24.121 }, 00:27:24.121 "method": "bdev_nvme_attach_controller" 00:27:24.121 }, 00:27:24.121 { 00:27:24.121 "method": "bdev_wait_for_examine" 00:27:24.121 } 00:27:24.121 ] 00:27:24.121 } 00:27:24.121 ] 00:27:24.121 } 00:27:24.121 [2024-05-14 23:42:47.335408] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:27:24.121 [2024-05-14 23:42:47.335616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80078 ] 00:27:24.380 [2024-05-14 23:42:47.484223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.637 [2024-05-14 23:42:47.692379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.283  Copying: 5120/5120 [kB] (average 1250 MBps) 00:27:26.283 00:27:26.283 23:42:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:27:26.283 23:42:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:27:26.283 23:42:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:27:26.283 23:42:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:27:26.283 23:42:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:27:26.283 23:42:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:27:26.283 23:42:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:27:26.283 23:42:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:27:26.283 23:42:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:27:26.283 23:42:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:27:26.283 { 00:27:26.283 "subsystems": [ 00:27:26.283 { 00:27:26.283 "subsystem": "bdev", 00:27:26.283 "config": [ 00:27:26.283 { 00:27:26.283 "params": { 00:27:26.283 "block_size": 4096, 00:27:26.283 "name": "aio1", 00:27:26.283 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:27:26.283 }, 00:27:26.283 "method": "bdev_aio_create" 00:27:26.283 }, 00:27:26.283 { 00:27:26.283 "params": { 00:27:26.283 "trtype": "pcie", 00:27:26.283 "name": "Nvme0", 00:27:26.283 "traddr": "0000:00:10.0" 00:27:26.283 }, 00:27:26.283 "method": "bdev_nvme_attach_controller" 00:27:26.283 }, 00:27:26.283 { 00:27:26.283 "method": "bdev_wait_for_examine" 00:27:26.283 } 00:27:26.283 ] 00:27:26.283 } 00:27:26.283 ] 00:27:26.283 } 00:27:26.283 [2024-05-14 23:42:49.411551] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
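The clear_nvme cleanup above scrubs the bdevs the suite wrote to: the 4194330-byte clear size rounds up to five 1 MiB blocks of /dev/zero per bdev. A condensed equivalent of the two clears; conf.json again stands in for the generated /dev/fd/62 config:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
for bdev in Nvme0n1 aio1; do
    # 4194330 bytes at bs=1048576 rounds up to count=5, matching the log
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob="$bdev" --count=5 --json conf.json
done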
00:27:26.283 [2024-05-14 23:42:49.411736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80111 ] 00:27:26.283 [2024-05-14 23:42:49.562767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.542 [2024-05-14 23:42:49.767436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.487  Copying: 5120/5120 [kB] (average 151 MBps) 00:27:28.487 00:27:28.488 23:42:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:27:28.488 ************************************ 00:27:28.488 END TEST spdk_dd_bdev_to_bdev 00:27:28.488 ************************************ 00:27:28.488 00:27:28.488 real 0m21.343s 00:27:28.488 user 0m16.843s 00:27:28.488 sys 0m2.696s 00:27:28.488 23:42:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:28.488 23:42:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:27:28.488 23:42:51 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:27:28.488 23:42:51 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:28.488 23:42:51 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:28.488 23:42:51 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:28.488 23:42:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:27:28.488 ************************************ 00:27:28.488 START TEST spdk_dd_sparse 00:27:28.488 ************************************ 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:28.488 * Looking for test storage... 
00:27:28.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- 
dd/sparse.sh@118 -- # prepare 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:27:28.488 1+0 records in 00:27:28.488 1+0 records out 00:27:28.488 4194304 bytes (4.2 MB) copied, 0.00657146 s, 638 MB/s 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:27:28.488 1+0 records in 00:27:28.488 1+0 records out 00:27:28.488 4194304 bytes (4.2 MB) copied, 0.00573061 s, 732 MB/s 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:27:28.488 1+0 records in 00:27:28.488 1+0 records out 00:27:28.488 4194304 bytes (4.2 MB) copied, 0.00631456 s, 664 MB/s 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:27:28.488 ************************************ 00:27:28.488 START TEST dd_sparse_file_to_file 00:27:28.488 ************************************ 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1121 -- # file_to_file 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:27:28.488 23:42:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:27:28.488 { 00:27:28.488 "subsystems": [ 00:27:28.488 { 00:27:28.488 "subsystem": "bdev", 00:27:28.488 "config": [ 00:27:28.488 { 00:27:28.488 "params": { 00:27:28.488 "block_size": 4096, 00:27:28.488 "name": "dd_aio", 00:27:28.488 "filename": "dd_sparse_aio_disk" 00:27:28.488 }, 00:27:28.488 "method": "bdev_aio_create" 00:27:28.488 }, 00:27:28.488 { 00:27:28.488 "params": { 00:27:28.488 "bdev_name": "dd_aio", 00:27:28.488 "lvs_name": "dd_lvstore" 00:27:28.488 }, 00:27:28.488 "method": "bdev_lvol_create_lvstore" 00:27:28.488 }, 00:27:28.488 { 00:27:28.488 "method": "bdev_wait_for_examine" 00:27:28.488 } 00:27:28.488 ] 
00:27:28.488 } 00:27:28.488 ] 00:27:28.488 } 00:27:28.747 [2024-05-14 23:42:51.786717] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:27:28.747 [2024-05-14 23:42:51.786909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80205 ] 00:27:28.747 [2024-05-14 23:42:51.936751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.005 [2024-05-14 23:42:52.154786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.947  Copying: 12/36 [MB] (average 1200 MBps) 00:27:30.947 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:30.947 ************************************ 00:27:30.947 END TEST dd_sparse_file_to_file 00:27:30.947 ************************************ 00:27:30.947 00:27:30.947 real 0m2.243s 00:27:30.947 user 0m1.798s 00:27:30.947 sys 0m0.299s 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:27:30.947 ************************************ 00:27:30.947 START TEST dd_sparse_file_to_bdev 00:27:30.947 ************************************ 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1121 -- # file_to_bdev 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size_in_mib"]=36 ["thin_provision"]=true) 00:27:30.947 
23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:27:30.947 23:42:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:27:30.947 { 00:27:30.947 "subsystems": [ 00:27:30.947 { 00:27:30.947 "subsystem": "bdev", 00:27:30.947 "config": [ 00:27:30.947 { 00:27:30.947 "params": { 00:27:30.947 "block_size": 4096, 00:27:30.947 "name": "dd_aio", 00:27:30.947 "filename": "dd_sparse_aio_disk" 00:27:30.947 }, 00:27:30.947 "method": "bdev_aio_create" 00:27:30.947 }, 00:27:30.947 { 00:27:30.947 "params": { 00:27:30.947 "size_in_mib": 36, 00:27:30.947 "thin_provision": true, 00:27:30.947 "lvol_name": "dd_lvol", 00:27:30.947 "lvs_name": "dd_lvstore" 00:27:30.947 }, 00:27:30.947 "method": "bdev_lvol_create" 00:27:30.947 }, 00:27:30.947 { 00:27:30.947 "method": "bdev_wait_for_examine" 00:27:30.947 } 00:27:30.947 ] 00:27:30.947 } 00:27:30.947 ] 00:27:30.947 } 00:27:30.947 [2024-05-14 23:42:54.074787] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:27:30.947 [2024-05-14 23:42:54.074982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80277 ] 00:27:30.947 [2024-05-14 23:42:54.226593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.206 [2024-05-14 23:42:54.430719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.150  Copying: 12/36 [MB] (average 600 MBps) 00:27:33.150 00:27:33.150 ************************************ 00:27:33.150 END TEST dd_sparse_file_to_bdev 00:27:33.150 ************************************ 00:27:33.150 00:27:33.150 real 0m2.235s 00:27:33.150 user 0m1.851s 00:27:33.150 sys 0m0.245s 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:27:33.150 ************************************ 00:27:33.150 START TEST dd_sparse_bdev_to_file 00:27:33.150 ************************************ 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1121 -- # bdev_to_file 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # 
method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:27:33.150 23:42:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:27:33.150 { 00:27:33.150 "subsystems": [ 00:27:33.150 { 00:27:33.150 "subsystem": "bdev", 00:27:33.150 "config": [ 00:27:33.150 { 00:27:33.150 "params": { 00:27:33.150 "block_size": 4096, 00:27:33.150 "name": "dd_aio", 00:27:33.150 "filename": "dd_sparse_aio_disk" 00:27:33.150 }, 00:27:33.150 "method": "bdev_aio_create" 00:27:33.150 }, 00:27:33.150 { 00:27:33.150 "method": "bdev_wait_for_examine" 00:27:33.150 } 00:27:33.150 ] 00:27:33.150 } 00:27:33.150 ] 00:27:33.150 } 00:27:33.150 [2024-05-14 23:42:56.356735] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:27:33.150 [2024-05-14 23:42:56.356910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80338 ] 00:27:33.409 [2024-05-14 23:42:56.507855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.666 [2024-05-14 23:42:56.720965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.300  Copying: 12/36 [MB] (average 1333 MBps) 00:27:35.300 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:35.300 00:27:35.300 real 0m2.172s 00:27:35.300 user 0m1.758s 00:27:35.300 sys 0m0.261s 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:27:35.300 ************************************ 00:27:35.300 END TEST dd_sparse_bdev_to_file 00:27:35.300 ************************************ 
00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:27:35.300 00:27:35.300 real 0m6.940s 00:27:35.300 user 0m5.494s 00:27:35.300 sys 0m0.985s 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:35.300 23:42:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:27:35.300 ************************************ 00:27:35.300 END TEST spdk_dd_sparse 00:27:35.300 ************************************ 00:27:35.300 23:42:58 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:35.300 23:42:58 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:35.300 23:42:58 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:35.300 23:42:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:27:35.300 ************************************ 00:27:35.300 START TEST spdk_dd_negative 00:27:35.300 ************************************ 00:27:35.300 23:42:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:35.300 * Looking for test storage... 00:27:35.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:35.300 23:42:58 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:35.300 23:42:58 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.300 23:42:58 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.300 23:42:58 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.300 23:42:58 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:35.300 23:42:58 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:35.300 23:42:58 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:35.300 23:42:58 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:27:35.300 23:42:58 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:27:35.301 23:42:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:35.301 23:42:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:35.301 23:42:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:35.301 23:42:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:35.301 23:42:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:27:35.301 23:42:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:35.301 23:42:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:35.301 23:42:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:35.560 ************************************ 00:27:35.560 START TEST dd_invalid_arguments 00:27:35.560 ************************************ 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1121 -- # invalid_arguments 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:35.560 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:35.560 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:27:35.560 00:27:35.560 CPU options: 00:27:35.560 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:27:35.560 (like [0,1,10]) 00:27:35.560 --lcores lcore to CPU mapping list. The list is in the format: 00:27:35.560 [<,lcores[@CPUs]>...] 00:27:35.560 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:27:35.560 Within the group, '-' is used for range separator, 00:27:35.560 ',' is used for single number separator. 00:27:35.560 '( )' can be omitted for single element group, 00:27:35.560 '@' can be omitted if cpus and lcores have the same value 00:27:35.560 --disable-cpumask-locks Disable CPU core lock files. 00:27:35.560 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:27:35.560 pollers in the app support interrupt mode) 00:27:35.560 -p, --main-core main (primary) core for DPDK 00:27:35.560 00:27:35.560 Configuration options: 00:27:35.560 -c, --config, --json JSON config file 00:27:35.560 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:27:35.560 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:27:35.560 --wait-for-rpc wait for RPCs to initialize subsystems 00:27:35.560 --rpcs-allowed comma-separated list of permitted RPCS 00:27:35.560 --json-ignore-init-errors don't exit on invalid config entry 00:27:35.560 00:27:35.560 Memory options: 00:27:35.560 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:27:35.560 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:27:35.560 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:27:35.560 -R, --huge-unlink unlink huge files after initialization 00:27:35.560 -n, --mem-channels number of memory channels used for DPDK 00:27:35.560 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:27:35.560 --msg-mempool-size global message memory pool size in count (default: 262143) 00:27:35.560 --no-huge run without using hugepages 00:27:35.560 -i, --shm-id shared memory ID (optional) 00:27:35.560 -g, --single-file-segments force creating just one hugetlbfs file 00:27:35.560 00:27:35.560 PCI options: 00:27:35.560 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:27:35.560 -B, --pci-blocked pci addr to block (can be used more than once) 00:27:35.560 -u, --no-pci disable PCI access 00:27:35.560 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:27:35.560 00:27:35.560 Log options: 00:27:35.560 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:27:35.560 app_config, app_rpc, bdev, bdev_concat, bdev_daos, bdev_ftl, 00:27:35.560 bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, 00:27:35.560 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:27:35.560 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:27:35.560 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:27:35.560 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:27:35.560 thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:27:35.560 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:27:35.560 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:27:35.560 virtio_vfio_user, vmd) 00:27:35.560 --silence-noticelog disable notice level logging to stderr 00:27:35.560 00:27:35.560 Trace options: 00:27:35.560 --num-trace-entries number of trace entries for each core, must be power of 2, 00:27:35.560 setting 0 to disable trace (default 32768) 00:27:35.560 Tracepoints vary in size and can use more than one trace entry. 00:27:35.560 -e, --tpoint-group [:] 00:27:35.560 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:27:35.560 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:27:35.560 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:27:35.560 [2024-05-14 23:42:58.729988] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:27:35.560 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:27:35.560 a tracepoint group. First tpoint inside a group can be enabled by 00:27:35.560 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:27:35.560 combined (e.g. thread,bdev:0x1).
All available tpoints can be found 00:27:35.561 in /include/spdk_internal/trace_defs.h 00:27:35.561 00:27:35.561 Other options: 00:27:35.561 -h, --help show this usage 00:27:35.561 -v, --version print SPDK version 00:27:35.561 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:27:35.561 --env-context Opaque context for use of the env implementation 00:27:35.561 00:27:35.561 Application specific: 00:27:35.561 [--------- DD Options ---------] 00:27:35.561 --if Input file. Must specify either --if or --ib. 00:27:35.561 --ib Input bdev. Must specifier either --if or --ib 00:27:35.561 --of Output file. Must specify either --of or --ob. 00:27:35.561 --ob Output bdev. Must specify either --of or --ob. 00:27:35.561 --iflag Input file flags. 00:27:35.561 --oflag Output file flags. 00:27:35.561 --bs I/O unit size (default: 4096) 00:27:35.561 --qd Queue depth (default: 2) 00:27:35.561 --count I/O unit count. The number of I/O units to copy. (default: all) 00:27:35.561 --skip Skip this many I/O units at start of input. (default: 0) 00:27:35.561 --seek Skip this many I/O units at start of output. (default: 0) 00:27:35.561 --aio Force usage of AIO. (by default io_uring is used if available) 00:27:35.561 --sparse Enable hole skipping in input target 00:27:35.561 Available iflag and oflag values: 00:27:35.561 append - append mode 00:27:35.561 direct - use direct I/O for data 00:27:35.561 directory - fail unless a directory 00:27:35.561 dsync - use synchronized I/O for data 00:27:35.561 noatime - do not update access time 00:27:35.561 noctty - do not assign controlling terminal from file 00:27:35.561 nofollow - do not follow symlinks 00:27:35.561 nonblock - use non-blocking I/O 00:27:35.561 sync - use synchronized I/O for data and metadata 00:27:35.561 ************************************ 00:27:35.561 END TEST dd_invalid_arguments 00:27:35.561 ************************************ 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:35.561 00:27:35.561 real 0m0.163s 00:27:35.561 user 0m0.032s 00:27:35.561 sys 0m0.034s 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:35.561 ************************************ 00:27:35.561 START TEST dd_double_input 00:27:35.561 ************************************ 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1121 -- # double_input 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:35.561 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:35.819 [2024-05-14 23:42:58.940464] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
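The assertion behind the NOT wrapper above can be written out directly: spdk_dd must refuse to run when both an input file and an input bdev are supplied, and must report why. A hedged bash sketch of that negative check, reusing only the binary path, the flags and the error text shown in the trace; the err.log scratch file is illustrative:

DD_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# Supplying both --if and --ib must fail; capture stdout and stderr so the message can be checked.
if "$DD_BIN" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= > err.log 2>&1; then
    echo "unexpected success: --if and --ib are mutually exclusive" >&2
    exit 1
fi
grep -q 'You may specify either --if or --ib, but not both' err.log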
00:27:35.819 ************************************ 00:27:35.819 END TEST dd_double_input 00:27:35.819 ************************************ 00:27:35.819 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:27:35.819 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:35.819 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:35.819 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:35.819 00:27:35.819 real 0m0.166s 00:27:35.819 user 0m0.032s 00:27:35.819 sys 0m0.037s 00:27:35.819 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:35.819 23:42:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:35.819 ************************************ 00:27:35.819 START TEST dd_double_output 00:27:35.819 ************************************ 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1121 -- # double_output 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.819 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.820 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:35.820 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:36.079 [2024-05-14 23:42:59.154706] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:36.079 00:27:36.079 real 0m0.178s 00:27:36.079 user 0m0.041s 00:27:36.079 sys 0m0.041s 00:27:36.079 ************************************ 00:27:36.079 END TEST dd_double_output 00:27:36.079 ************************************ 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:36.079 ************************************ 00:27:36.079 START TEST dd_no_input 00:27:36.079 ************************************ 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1121 -- # no_input 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:36.079 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:36.338 [2024-05-14 23:42:59.382609] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:27:36.338 ************************************ 00:27:36.338 END TEST dd_no_input 00:27:36.338 ************************************ 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:36.338 00:27:36.338 real 0m0.177s 00:27:36.338 user 0m0.043s 00:27:36.338 sys 0m0.037s 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:36.338 ************************************ 00:27:36.338 START TEST dd_no_output 00:27:36.338 ************************************ 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1121 -- # no_output 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:36.338 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:36.338 [2024-05-14 23:42:59.605235] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:27:36.597 
************************************ 00:27:36.597 END TEST dd_no_output 00:27:36.597 ************************************ 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:36.597 00:27:36.597 real 0m0.170s 00:27:36.597 user 0m0.034s 00:27:36.597 sys 0m0.040s 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:36.597 ************************************ 00:27:36.597 START TEST dd_wrong_blocksize 00:27:36.597 ************************************ 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1121 -- # wrong_blocksize 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:36.597 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:36.598 [2024-05-14 23:42:59.822440] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:36.598 00:27:36.598 real 0m0.166s 00:27:36.598 user 0m0.033s 00:27:36.598 sys 0m0.036s 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:36.598 ************************************ 00:27:36.598 END TEST dd_wrong_blocksize 00:27:36.598 ************************************ 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:36.598 23:42:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:36.857 ************************************ 00:27:36.857 START TEST dd_smaller_blocksize 00:27:36.857 ************************************ 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1121 -- # smaller_blocksize 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.857 
23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:36.857 23:42:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:36.857 [2024-05-14 23:43:00.038013] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:27:36.857 [2024-05-14 23:43:00.038434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80624 ] 00:27:37.115 [2024-05-14 23:43:00.186854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.115 [2024-05-14 23:43:00.379571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.683 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:27:37.683 [2024-05-14 23:43:00.915554] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:27:37.683 [2024-05-14 23:43:00.915676] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:38.619 [2024-05-14 23:43:01.699528] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:27:38.878 ************************************ 00:27:38.878 END TEST dd_smaller_blocksize 00:27:38.878 ************************************ 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:38.878 00:27:38.878 real 0m2.155s 00:27:38.878 user 0m1.573s 00:27:38.878 sys 0m0.383s 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:38.878 ************************************ 00:27:38.878 START TEST dd_invalid_count 00:27:38.878 ************************************ 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1121 -- # invalid_count 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
--count=-9 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:38.878 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:39.137 [2024-05-14 23:43:02.239318] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:39.137 00:27:39.137 real 0m0.164s 00:27:39.137 user 0m0.034s 00:27:39.137 sys 0m0.034s 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:27:39.137 ************************************ 00:27:39.137 END TEST dd_invalid_count 00:27:39.137 ************************************ 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:39.137 ************************************ 00:27:39.137 START TEST dd_invalid_oflag 00:27:39.137 ************************************ 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1121 -- # invalid_oflag 00:27:39.137 23:43:02 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:27:39.137 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:39.138 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.138 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.138 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.138 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.138 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.138 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.138 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.138 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:39.138 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:39.397 [2024-05-14 23:43:02.446948] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:39.397 00:27:39.397 real 0m0.160s 00:27:39.397 user 0m0.031s 00:27:39.397 sys 0m0.033s 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:27:39.397 ************************************ 00:27:39.397 END TEST dd_invalid_oflag 00:27:39.397 ************************************ 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:39.397 ************************************ 00:27:39.397 START TEST dd_invalid_iflag 00:27:39.397 ************************************ 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1121 -- # invalid_iflag 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:39.397 [2024-05-14 23:43:02.655773] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:39.397 00:27:39.397 real 0m0.166s 00:27:39.397 user 0m0.038s 00:27:39.397 sys 0m0.032s 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:39.397 23:43:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:27:39.397 ************************************ 00:27:39.397 END TEST dd_invalid_iflag 00:27:39.656 ************************************ 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:39.656 ************************************ 00:27:39.656 START TEST dd_unknown_flag 00:27:39.656 ************************************ 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1121 -- # unknown_flag 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:39.656 23:43:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:39.656 [2024-05-14 23:43:02.873695] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 
00:27:39.656 [2024-05-14 23:43:02.873899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80754 ] 00:27:39.915 [2024-05-14 23:43:03.029994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.173 [2024-05-14 23:43:03.246487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.429 [2024-05-14 23:43:03.603262] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:27:40.429 [2024-05-14 23:43:03.603368] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:40.429  Copying: 0/0 [B] (average 0 Bps)[2024-05-14 23:43:03.603521] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:27:41.363 [2024-05-14 23:43:04.432956] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:27:41.623 00:27:41.623 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:41.623 00:27:41.623 real 0m2.089s 00:27:41.623 user 0m1.658s 00:27:41.623 sys 0m0.231s 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:27:41.623 ************************************ 00:27:41.623 END TEST dd_unknown_flag 00:27:41.623 ************************************ 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:41.623 ************************************ 00:27:41.623 START TEST dd_invalid_json 00:27:41.623 ************************************ 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1121 -- # invalid_json 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:41.623 23:43:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:41.886 [2024-05-14 23:43:05.021025] Starting SPDK v24.05-pre git sha1 e8841656d / DPDK 23.11.0 initialization... 00:27:41.886 [2024-05-14 23:43:05.021228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80801 ] 00:27:42.145 [2024-05-14 23:43:05.174945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.145 [2024-05-14 23:43:05.392994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.145 [2024-05-14 23:43:05.393120] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:27:42.145 [2024-05-14 23:43:05.393168] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:42.145 [2024-05-14 23:43:05.393191] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:42.145 [2024-05-14 23:43:05.393272] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:27:42.713 23:43:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:27:42.713 23:43:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:42.713 23:43:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:27:42.713 23:43:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:27:42.713 23:43:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:27:42.713 23:43:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:42.713 00:27:42.713 real 0m0.897s 00:27:42.713 user 0m0.581s 00:27:42.713 sys 0m0.120s 00:27:42.713 23:43:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:42.713 ************************************ 00:27:42.713 END TEST dd_invalid_json 00:27:42.713 ************************************ 00:27:42.713 23:43:05 
spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:27:42.713 00:27:42.713 real 0m7.312s 00:27:42.713 user 0m4.345s 00:27:42.713 sys 0m1.463s 00:27:42.713 23:43:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:42.713 23:43:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:27:42.713 ************************************ 00:27:42.713 END TEST spdk_dd_negative 00:27:42.713 ************************************ 00:27:42.713 00:27:42.713 real 2m52.719s 00:27:42.713 user 2m15.399s 00:27:42.713 sys 0m20.702s 00:27:42.713 ************************************ 00:27:42.713 END TEST spdk_dd 00:27:42.713 ************************************ 00:27:42.713 23:43:05 spdk_dd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:42.713 23:43:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:27:42.713 23:43:05 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@256 -- # timing_exit lib 00:27:42.713 23:43:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.713 23:43:05 -- common/autotest_common.sh@10 -- # set +x 00:27:42.713 23:43:05 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:42.713 23:43:05 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:27:42.713 23:43:05 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:42.713 23:43:05 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:42.713 23:43:05 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:42.713 23:43:05 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:27:42.713 23:43:05 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:27:42.713 23:43:05 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:42.713 23:43:05 -- common/autotest_common.sh@10 -- # set +x 00:27:42.713 23:43:05 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:27:42.713 23:43:05 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:27:42.713 23:43:05 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:27:42.713 23:43:05 -- common/autotest_common.sh@10 -- # set +x 00:27:43.650 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:27:43.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:27:43.650 Waiting for block devices as requested 00:27:43.650 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:43.909 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:27:43.909 
/home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:27:43.909 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:27:44.168 Cleaning 00:27:44.168 Removing: /var/run/dpdk/spdk0/config 00:27:44.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:44.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:44.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:44.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:44.168 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:44.168 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:44.168 Removing: /dev/shm/spdk_tgt_trace.pid45958 00:27:44.168 Removing: /var/run/dpdk/spdk0 00:27:44.168 Removing: /var/run/dpdk/spdk_pid45690 00:27:44.168 Removing: /var/run/dpdk/spdk_pid45958 00:27:44.168 Removing: /var/run/dpdk/spdk_pid46224 00:27:44.168 Removing: /var/run/dpdk/spdk_pid46335 00:27:44.168 Removing: /var/run/dpdk/spdk_pid46401 00:27:44.168 Removing: /var/run/dpdk/spdk_pid46542 00:27:44.168 Removing: /var/run/dpdk/spdk_pid46567 00:27:44.168 Removing: /var/run/dpdk/spdk_pid46751 00:27:44.168 Removing: /var/run/dpdk/spdk_pid47015 00:27:44.168 Removing: /var/run/dpdk/spdk_pid47209 00:27:44.168 Removing: /var/run/dpdk/spdk_pid47332 00:27:44.168 Removing: /var/run/dpdk/spdk_pid47446 00:27:44.168 Removing: /var/run/dpdk/spdk_pid47575 00:27:44.168 Removing: /var/run/dpdk/spdk_pid47704 00:27:44.168 Removing: /var/run/dpdk/spdk_pid47758 00:27:44.168 Removing: /var/run/dpdk/spdk_pid47802 00:27:44.168 Removing: /var/run/dpdk/spdk_pid47891 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48040 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48131 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48215 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48242 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48424 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48444 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48615 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48636 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48719 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48751 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48820 00:27:44.168 Removing: /var/run/dpdk/spdk_pid48843 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49060 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49104 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49145 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49242 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49336 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49381 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49480 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49531 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49600 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49655 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49708 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49772 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49826 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49884 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49940 00:27:44.168 Removing: /var/run/dpdk/spdk_pid49998 00:27:44.168 Removing: /var/run/dpdk/spdk_pid50051 00:27:44.168 Removing: /var/run/dpdk/spdk_pid50114 00:27:44.168 Removing: /var/run/dpdk/spdk_pid50165 00:27:44.168 Removing: /var/run/dpdk/spdk_pid50223 00:27:44.168 Removing: /var/run/dpdk/spdk_pid50328 00:27:44.168 Removing: /var/run/dpdk/spdk_pid50472 00:27:44.168 Removing: /var/run/dpdk/spdk_pid50682 00:27:44.169 Removing: /var/run/dpdk/spdk_pid50776 00:27:44.169 Removing: /var/run/dpdk/spdk_pid50845 00:27:44.169 
Removing: /var/run/dpdk/spdk_pid50984 00:27:44.169 Removing: /var/run/dpdk/spdk_pid51220 00:27:44.169 Removing: /var/run/dpdk/spdk_pid51430 00:27:44.169 Removing: /var/run/dpdk/spdk_pid51564 00:27:44.169 Removing: /var/run/dpdk/spdk_pid51708 00:27:44.169 Removing: /var/run/dpdk/spdk_pid51791 00:27:44.169 Removing: /var/run/dpdk/spdk_pid51820 00:27:44.169 Removing: /var/run/dpdk/spdk_pid51859 00:27:44.169 Removing: /var/run/dpdk/spdk_pid52342 00:27:44.169 Removing: /var/run/dpdk/spdk_pid52436 00:27:44.169 Removing: /var/run/dpdk/spdk_pid52560 00:27:44.169 Removing: /var/run/dpdk/spdk_pid52627 00:27:44.169 Removing: /var/run/dpdk/spdk_pid53644 00:27:44.169 Removing: /var/run/dpdk/spdk_pid54778 00:27:44.169 Removing: /var/run/dpdk/spdk_pid55938 00:27:44.169 Removing: /var/run/dpdk/spdk_pid58414 00:27:44.169 Removing: /var/run/dpdk/spdk_pid60908 00:27:44.169 Removing: /var/run/dpdk/spdk_pid63391 00:27:44.169 Removing: /var/run/dpdk/spdk_pid66423 00:27:44.169 Removing: /var/run/dpdk/spdk_pid69227 00:27:44.169 Removing: /var/run/dpdk/spdk_pid71999 00:27:44.169 Removing: /var/run/dpdk/spdk_pid73290 00:27:44.169 Removing: /var/run/dpdk/spdk_pid74152 00:27:44.169 Removing: /var/run/dpdk/spdk_pid75008 00:27:44.169 Removing: /var/run/dpdk/spdk_pid75484 00:27:44.169 Removing: /var/run/dpdk/spdk_pid76369 00:27:44.169 Removing: /var/run/dpdk/spdk_pid76427 00:27:44.169 Removing: /var/run/dpdk/spdk_pid76490 00:27:44.169 Removing: /var/run/dpdk/spdk_pid76551 00:27:44.169 Removing: /var/run/dpdk/spdk_pid76706 00:27:44.169 Removing: /var/run/dpdk/spdk_pid76854 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77085 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77347 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77371 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77456 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77497 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77533 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77573 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77604 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77633 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77675 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77707 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77740 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77779 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77812 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77843 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77882 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77914 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77947 00:27:44.169 Removing: /var/run/dpdk/spdk_pid77988 00:27:44.169 Removing: /var/run/dpdk/spdk_pid78019 00:27:44.169 Removing: /var/run/dpdk/spdk_pid78048 00:27:44.169 Removing: /var/run/dpdk/spdk_pid78100 00:27:44.169 Removing: /var/run/dpdk/spdk_pid78146 00:27:44.169 Removing: /var/run/dpdk/spdk_pid78190 00:27:44.169 Removing: /var/run/dpdk/spdk_pid78283 00:27:44.169 Removing: /var/run/dpdk/spdk_pid78337 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78364 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78418 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78446 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78473 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78537 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78569 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78620 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78649 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78683 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78707 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78739 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78767 00:27:44.428 Removing: 
/var/run/dpdk/spdk_pid78792 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78821 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78872 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78923 00:27:44.428 Removing: /var/run/dpdk/spdk_pid78956 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79001 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79036 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79067 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79136 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79167 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79210 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79243 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79274 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79299 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79327 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79352 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79381 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79410 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79516 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79604 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79755 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79797 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79857 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79918 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79956 00:27:44.428 Removing: /var/run/dpdk/spdk_pid79989 00:27:44.428 Removing: /var/run/dpdk/spdk_pid80027 00:27:44.428 Removing: /var/run/dpdk/spdk_pid80078 00:27:44.428 Removing: /var/run/dpdk/spdk_pid80111 00:27:44.428 Removing: /var/run/dpdk/spdk_pid80205 00:27:44.428 Removing: /var/run/dpdk/spdk_pid80277 00:27:44.428 Removing: /var/run/dpdk/spdk_pid80338 00:27:44.428 Removing: /var/run/dpdk/spdk_pid80624 00:27:44.428 Removing: /var/run/dpdk/spdk_pid80754 00:27:44.428 Removing: /var/run/dpdk/spdk_pid80801 00:27:44.428 Clean 00:27:44.428 /home/vagrant/spdk_repo/spdk/scripts/../scripts/common.sh: line 504: /proc/sys/kernel/printk_devkmsg: No such file or directory 00:27:44.428 23:43:07 -- common/autotest_common.sh@1447 -- # return 0 00:27:44.428 23:43:07 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:27:44.428 23:43:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:44.428 23:43:07 -- common/autotest_common.sh@10 -- # set +x 00:27:44.428 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 678: 36908 Terminated ${SUDO[MONITOR_RESOURCES_SUDO["$monitor"]]} "$_pmdir/$monitor" -d "$PM_OUTPUTDIR" -l -p "monitor.${0##*/}.$(date +%s)" (wd: /home/vagrant/spdk_repo) 00:27:44.428 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 678: 36909 Terminated ${SUDO[MONITOR_RESOURCES_SUDO["$monitor"]]} "$_pmdir/$monitor" -d "$PM_OUTPUTDIR" -l -p "monitor.${0##*/}.$(date +%s)" (wd: /home/vagrant/spdk_repo) 00:27:44.428 23:43:07 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:27:44.428 23:43:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:44.428 23:43:07 -- common/autotest_common.sh@10 -- # set +x 00:27:44.428 23:43:07 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:44.428 23:43:07 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:44.428 23:43:07 -- spdk/autotest.sh@385 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:44.428 23:43:07 -- spdk/autotest.sh@387 -- # hash lcov 00:27:44.428 23:43:07 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:44.428 23:43:07 -- spdk/autotest.sh@389 -- # hostname 00:27:44.428 23:43:07 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t centos7-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:44.687 geninfo: WARNING: invalid characters removed from testname! 00:28:40.927 23:44:00 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:44.214 23:44:06 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:47.608 23:44:10 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:50.894 23:44:13 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:54.183 23:44:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:57.469 23:44:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:00.829 23:44:23 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:00.829 23:44:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:00.829 23:44:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:00.829 23:44:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.829 23:44:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.829 23:44:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:29:00.829 23:44:23 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:29:00.829 23:44:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:29:00.829 23:44:23 -- paths/export.sh@5 -- $ export PATH 00:29:00.829 23:44:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:29:00.829 23:44:23 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:00.829 23:44:23 -- common/autobuild_common.sh@437 -- $ date +%s 00:29:00.829 23:44:23 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715730263.XXXXXX 00:29:00.829 23:44:23 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715730263.rICJNx 00:29:00.829 23:44:23 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:29:00.829 23:44:23 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:29:00.829 23:44:23 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:00.829 23:44:23 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:00.829 23:44:23 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:00.829 23:44:23 -- common/autobuild_common.sh@453 -- $ get_config_params 00:29:00.829 23:44:23 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:29:00.829 23:44:23 -- common/autotest_common.sh@10 -- $ set +x 00:29:00.829 23:44:23 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos' 00:29:00.829 23:44:23 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:29:00.829 23:44:23 -- pm/common@17 -- $ local monitor 00:29:00.829 23:44:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:00.829 23:44:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:00.829 23:44:23 -- pm/common@25 -- $ sleep 1 00:29:00.829 23:44:23 -- pm/common@21 -- $ date +%s 00:29:00.829 23:44:23 -- pm/common@21 -- $ date +%s 00:29:00.829 23:44:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715730263 00:29:00.829 23:44:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715730263 00:29:00.829 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715730263_collect-vmstat.pm.log 00:29:00.829 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715730263_collect-cpu-load.pm.log 00:29:01.776 23:44:24 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:29:01.776 23:44:24 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:01.776 23:44:24 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:01.776 23:44:24 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:01.776 23:44:24 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:01.776 23:44:24 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:01.776 23:44:24 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:01.776 23:44:24 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:01.776 23:44:24 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:01.776 23:44:25 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:01.776 23:44:25 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:01.776 23:44:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:01.776 23:44:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:01.776 23:44:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:01.776 23:44:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:01.776 23:44:25 -- pm/common@44 -- $ pid=82139 00:29:01.776 23:44:25 -- pm/common@50 -- $ kill -TERM 82139 00:29:01.776 23:44:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:01.776 23:44:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:01.776 23:44:25 -- pm/common@44 -- $ pid=82140 00:29:01.776 23:44:25 -- pm/common@50 -- $ kill -TERM 82140 00:29:01.776 + [[ -n 2622 ]] 00:29:01.776 + sudo kill 2622 00:29:02.035 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:29:02.044 [Pipeline] } 00:29:02.063 [Pipeline] // timeout 00:29:02.069 [Pipeline] } 00:29:02.087 [Pipeline] // stage 00:29:02.092 [Pipeline] } 00:29:02.108 [Pipeline] // catchError 00:29:02.117 [Pipeline] stage 00:29:02.118 [Pipeline] { (Stop VM) 00:29:02.132 [Pipeline] sh 00:29:02.410 + vagrant halt 00:29:07.684 ==> default: Halting domain... 00:29:12.967 [Pipeline] sh 00:29:13.248 + vagrant destroy -f 00:29:17.437 ==> default: Removing domain... 00:29:17.709 [Pipeline] sh 00:29:17.988 + mv output /var/jenkins/workspace/centos7-vg-autotest/output 00:29:18.001 [Pipeline] } 00:29:18.019 [Pipeline] // stage 00:29:18.024 [Pipeline] } 00:29:18.041 [Pipeline] // dir 00:29:18.046 [Pipeline] } 00:29:18.065 [Pipeline] // wrap 00:29:18.071 [Pipeline] } 00:29:18.086 [Pipeline] // catchError 00:29:18.095 [Pipeline] stage 00:29:18.097 [Pipeline] { (Epilogue) 00:29:18.111 [Pipeline] sh 00:29:18.392 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:40.362 [Pipeline] catchError 00:29:40.363 [Pipeline] { 00:29:40.374 [Pipeline] sh 00:29:40.648 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:40.648 Artifacts sizes are good 00:29:40.656 [Pipeline] } 00:29:40.673 [Pipeline] // catchError 00:29:40.682 [Pipeline] archiveArtifacts 00:29:40.688 Archiving artifacts 00:29:41.008 [Pipeline] cleanWs 00:29:41.017 [WS-CLEANUP] Deleting project workspace... 00:29:41.017 [WS-CLEANUP] Deferred wipeout is used... 
00:29:41.022 [WS-CLEANUP] done 00:29:41.026 [Pipeline] } 00:29:41.043 [Pipeline] // stage 00:29:41.048 [Pipeline] } 00:29:41.064 [Pipeline] // node 00:29:41.071 [Pipeline] End of Pipeline 00:29:41.105 Finished: SUCCESS